Software Networks. Guy Pujolle
… are redistributed. This enables us to maintain isolation while still sharing the hardware resources. In addition, we can attach a certain priority to a software network while preserving the isolation, by allowing that particular network to spend its tokens as a matter of priority over the other networks. This is relative priority, because each network can, at any moment, recoup its basic resources. However, the priority can be accentuated by distributing any excess resources to the priority networks, which will then always have a token available to handle a packet. Of course, isolation requires other characteristics of the hypervisors and the virtualization techniques, which we will not discuss in this book.
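
      To make this token mechanism concrete, here is a minimal sketch, assuming a simple model in which each software network holds a guaranteed token budget, spends one token per packet, and any tokens left unspent are redistributed to the priority networks at each refill. The names (Network, TokenScheduler) are hypothetical illustrations, not part of any real hypervisor API.

class Network:
    def __init__(self, name, guaranteed_tokens, priority=False):
        self.name = name
        self.guaranteed = guaranteed_tokens  # baseline share of the hardware
        self.tokens = guaranteed_tokens
        self.priority = priority

    def handle_packet(self):
        """Spend one token to process a packet; refuse if none are left."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

class TokenScheduler:
    def __init__(self, networks):
        self.networks = networks

    def refill(self):
        """Restore each network's guaranteed budget (isolation), then hand
        any surplus to the priority networks (relative priority)."""
        surplus = 0
        for net in self.networks:
            surplus += max(0, net.tokens)  # tokens left unspent this round
            net.tokens = net.guaranteed    # every network recoups its base share
        for net in self.networks:
            if net.priority and surplus > 0:
                net.tokens += surplus      # excess goes to the priority network
                surplus = 0

# Example: two networks share the hardware; "voice" has priority.
nets = [Network("voice", 2, priority=True), Network("data", 2)]
sched = TokenScheduler(nets)
for _ in range(2):
    nets[0].handle_packet()  # "voice" spends its budget; "data" stays idle
sched.refill()
print(nets[0].tokens, nets[1].tokens)  # 4 2: voice recoups its share plus data's surplus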

      All devices can be virtualized, with the exception of those which handle the reception of physical signals, such as electromagnetic waves or atmospheric pressure. For example, an antenna or a thermometer cannot be replaced by a piece of software. However, the signal received by that antenna or thermometer can be processed by a virtual machine. A sensor picking up a signal can select an appropriate virtual processing machine in order to produce the result required. A single antenna might, for example, receive signals from a Wi-Fi terminal as well as signals from a 4G terminal. On the basis of the type of signal, an initial virtual machine determines which technology is being used, and sends the signal to the virtual machine needed for its processing. This is known as SDR (Software-Defined Radio), which is becoming increasingly widely used, and enables us to relocate the processing operation to a datacenter.
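
      The dispatch step just described can be sketched as follows, assuming the digitized signal arrives with enough metadata to classify the radio technology. classify_signal() and the handler names are hypothetical placeholders, not a real SDR API; a real classifier would analyze the waveform itself.

def classify_signal(samples):
    """First virtual machine: decide which technology the signal uses.
    A placeholder that inspects a metadata field."""
    return samples.get("technology", "unknown")

def process_wifi(samples):
    return f"Wi-Fi frame decoded from {len(samples['iq'])} samples"

def process_4g(samples):
    return f"4G subframe decoded from {len(samples['iq'])} samples"

# Mapping from detected technology to the virtual processing machine.
HANDLERS = {"wifi": process_wifi, "4g": process_4g}

def dispatch(samples):
    tech = classify_signal(samples)
    handler = HANDLERS.get(tech)
    if handler is None:
        raise ValueError(f"No virtual machine available for: {tech}")
    return handler(samples)

# One antenna, two technologies: the same entry point serves both.
print(dispatch({"technology": "wifi", "iq": [0.1] * 64}))
print(dispatch({"technology": "4g", "iq": [0.2] * 128}))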

      The networking machines we know today can always be virtualized, either completely or at least partially. A partial virtualization can correspond to the processing part, the control part or the management part. Thus, today, we can split a physical machine that, in the past, was monolithic into several different machines – one of them physical (e.g. a transceiver broadcasting along a metal cable) and the others virtual. One of the advantages of this uncoupling is that we can offload the virtual parts onto other physical machines for execution. This means that we can adapt the power of the resources to the results we wish to obtain. Operations originating on different physical machines can be multiplexed onto the same software machine executing on a single physical server. This solution helps us to economize on the overall cost of the system, as well as on the energy expended, by grouping together the necessary power on a single machine that is much more powerful and more economical.

      Today, all legacy machines in the world of networking have either been virtualized already or are in the process of being virtualized – Node Bs for processing the signals of 3G, 4G and 5G mobile networks, HLRs and VLRs, routers, switches, the various router/switch hybrids such as those of MPLS, firewalls, authentication and identity-management servers, etc. In addition, these virtual machines can be partitioned so that they execute on several physical machines in parallel.

      Another interesting application of virtualization is expanding: the digital twin. A piece of hardware is associated with a virtual machine, executed in a datacenter located either near to or far from the hardware, which does exactly what the hardware does. Obviously, the hardware must feed the virtual machine with the new values whenever its parameters change. The virtual machine should then produce the same results as the hardware; if the results differ, this indicates a malfunction in the hardware, and that malfunction can be studied in real time on the virtual machine. This solution makes it possible to spot malfunctions in real time and, in most cases, to correct them.
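
      The comparison loop at the heart of this scheme can be sketched as follows, assuming a simple analytical model of the hardware and a fixed tolerance. twin_model(), its parameters and the numbers are hypothetical placeholders, not data from any real system.

def twin_model(rpm, fuel_flow):
    """Hypothetical model of the hardware's expected output
    (e.g. an engine's temperature as a function of its inputs)."""
    return 20.0 + 0.05 * rpm + 2.0 * fuel_flow

def check_twin(measured_temp, rpm, fuel_flow, tolerance=5.0):
    """Compare what the hardware reports with what the twin predicts."""
    expected = twin_model(rpm, fuel_flow)
    diverged = abs(measured_temp - expected) > tolerance
    return expected, diverged

# Normal reading: hardware and twin agree.
print(check_twin(measured_temp=271.0, rpm=3000, fuel_flow=50.0))
# Abnormal reading: divergence signals a possible malfunction.
print(check_twin(measured_temp=320.0, rpm=3000, fuel_flow=50.0))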

      Digital twins are already in use or under development, such as the twin of an aircraft engine executed in a datacenter. Similarly, vehicles will soon have a twin, allowing us to detect malfunctions or to understand an accident. Manufacturers are also developing digital twins for objects; in this case, the digital twin can have far more processing power than the object itself, and can perform actions that the object is not powerful enough to perform.

      Scientists dream of human digital twins which could keep working while the human sleeps.

      Virtualization is the fundamental property of the new generation of networks, where we make the move from hardware to software. While there is a noticeable reduction in performance at first, it is compensated for by more powerful, less costly physical machines. Nonetheless, the move in the opposite direction to virtualization is also crucial: concretization, i.e. enabling the software to be executed on reconfigurable machines, so that the properties of the software are retained and top-of-the-range performance can once again be achieved.

      Software networks form the backbone of the new means of data transport. They are agile, simple to implement and inexpensive, and they can be modified or replaced at will. Virtualization also enables us to uncouple functions and to host the resulting algorithms on shared machines, which offers substantial savings in terms of resources and of qualified personnel.

      2

      SDN (Software-Defined Networking)

      SDN (Software-Defined Networking) technology is at the heart of this book. It was introduced with strong control centralization and virtualization, enabling physical networking devices to be transformed into software. Associated with this definition, a new architecture has been defined: it decouples the data plane from the control plane. Until now, forwarding tables have been computed in a distributed manner by each router or switch. In the new architecture, the computations for optimal control are performed by a separate device, called the controller. Generally, the controller is centralized, but it could perfectly well be distributed. Before taking a closer look at this architecture, let us examine the reasons for this new paradigm.
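
      This decoupling can be sketched as follows, assuming the controller holds a global view of the topology as a weighted graph with bidirectional links of equal cost, and computes each switch's forwarding table centrally (here with Dijkstra's algorithm). The graph and rule format are hypothetical illustrations; in a real deployment, a protocol such as OpenFlow would carry the resulting rules down to the switches.

import heapq

def shortest_paths(graph, source):
    """Dijkstra over the controller's global view; returns each node's
    predecessor on its shortest path from the source."""
    dist = {source: 0}
    prev = {}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    return prev

def forwarding_table(graph, switch):
    """Next hop from one switch toward every destination, computed
    centrally. With bidirectional equal-cost links, the predecessor of
    the switch on the path from a destination is its next hop."""
    table = {}
    for dest in graph:
        if dest == switch:
            continue
        prev = shortest_paths(graph, dest)
        hop = prev.get(switch)
        if hop is not None:
            table[dest] = hop
    return table

# Global view held by the controller: switches and link costs.
topology = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1},
    "s3": {"s1": 4, "s2": 1},
}
# The controller pushes one table per switch down to the data plane.
for sw in topology:
    print(sw, forwarding_table(topology, sw))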

      The limitations of traditional architectures are becoming significant: today's networks no longer optimize costs at all (i.e. CAPEX and OPEX). The networks are not agile, their time to market is far too long, and their provisioning techniques are not fast enough. In addition, the networks are completely disconnected from the services they carry. The following points need to be taken into account in the new SDN paradigm:

       – overall needs analysis;

       – dynamic, rather than static, configuration;

       – dynamic, rather than static, policies;

       – much greater information feedback than is the case at present;

       – precise knowledge of the clients, their applications and, more generally, their requirements.

      The objective of SDN (Software-Defined …