Title: Software Networks
Author: Guy Pujolle
Publisher: John Wiley & Sons Limited
Genre: Industry publications
ISBN: 9781119694724
Finally, there is one last reason to favor migration to a new network: security. Security requires a precise view and understanding of the problems at hand, which range from physical security to computer security, with the need to lay contingency plans for attacks that are sometimes entirely unforeseeable. The world of the Internet today is like a bicycle tire made up entirely of patches, having been punctured and repaired numerous times. Every time an attack succeeds, a new patch is added. Such a tire is still roadworthy for the moment, but there is a danger that it will burst if no new solution is envisaged in the next few years. Near the end of this book, in Chapter 15, we will look at the secure Cloud, in which a whole set of solutions is built, within a datacenter, around specialized virtual machines whose aim is to enhance the security of applications and networks.
An effective security mechanism must include a physical element: a safe box to protect the important elements of the arsenal needed to ensure confidentiality, authentication, etc. Software security is a reality and, to a certain extent, may be sufficient for numerous applications. However, secure elements can always be circumvented when all of the defenses are software-based. This means that, for new generations, there must be a physical element, either local or remote. This hardware element is a secure microprocessor known as a “secure element”. A classic example of this type of device is the smartcard, used very widely by telecom operators and banks.
Depending on whether it belongs to the world of business or of consumer electronics, the secure element may be found in the terminal, near it, or far away from it. We will examine the different solutions in the subsequent chapters of this book.
Virtualization also has an impact on security: the Cloud, with specialized virtual machines, means that attackers have remarkable striking force at their disposal. In the last few years, hackers’ ability to break encryption algorithms has increased by a factor of 10⁶.
Another important point that absolutely must be integrated in networks is “intelligence”. So-called “intelligent networks” have had their day, but the intelligence in that case was not really what we mean by “intelligence” in this field. Rather, it was a set of automatic mechanisms used to deal with problems perfectly defined in advance, such as a signaling protocol for providing additional features in the telephone system. In the new generation of networks, intelligence pertains to learning mechanisms and intelligent decisions based on the network status and user requests. The network needs to become an intelligent system, capable of making decisions on its own. One solution to help move in this direction was introduced by IBM in the early 2000s: “autonomic” networking. “Autonomic” means autonomous and spontaneous: autonomous in the sense that every device in the network must be able to make decisions independently, with knowledge of its situated view, i.e. the state of the nodes surrounding it within a certain number of hops. The solutions that have been put forward to make networks smarter are influenced by Cloud technology. We will discuss them in detail in the chapter about MEC (Mobile Edge Computing) and, more generally, about the “smart edge” (Chapter 5).
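As a purely illustrative example of what a situated view might look like in code, here is a minimal Python sketch, not taken from the book, in which a node gathers the state of every neighbor within a given number of hops; the topology, state format and function names are all assumptions made for the example.

```python
from collections import deque

def situated_view(topology, states, node, max_hops):
    """Collect the state of every node within max_hops of `node`.

    topology : dict mapping each node to the list of its neighbors
    states   : dict mapping each node to its locally advertised state
    Returns a dict {node: state} restricted to the situated view.
    """
    view = {node: states[node]}
    visited = {node}
    queue = deque([(node, 0)])
    while queue:
        current, dist = queue.popleft()
        if dist == max_hops:
            continue  # do not look beyond the allowed number of hops
        for neighbor in topology.get(current, []):
            if neighbor not in visited:
                visited.add(neighbor)
                view[neighbor] = states[neighbor]
                queue.append((neighbor, dist + 1))
    return view

# Example: node "A" builds its situated view over 2 hops.
topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B", "E"], "E": ["D"]}
states = {name: {"load": 0.1 * i} for i, name in enumerate(topology)}
print(situated_view(topology, states, "A", max_hops=2))  # "E" is 3 hops away and excluded
```

In an autonomic device, a view of this kind would be refreshed continuously from what neighboring nodes advertise, and local decisions would be based on it rather than on a global picture of the network.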
Finally, one last point, which could be viewed as the fourth revolution, is concretization, i.e. the opposite of virtualization. Indeed, the problem with virtualization is a significant reduction in performance, stemming from the replacement of hardware with software. A variety of solutions have been put forward to regain this performance: software accelerators and, in particular, the replacement of software with hardware, in a step known as concretization. The software is replaced by reconfigurable hardware, which can transform itself depending on the software to be executed. This approach is likely to give rise to morphware networks, which will be described in Chapter 16.
I.4. Conclusion
The world of networks is changing greatly, for the reasons listed above. It is changing more quickly than might have been expected a few years ago. One suggestion to redefine network architectures was put forward but failed: starting again from scratch. This is known as the “Clean Slate Approach”: forgetting everything we know and starting over. Unfortunately, no concrete proposition has been adopted, and the transfer of IP packets continues to be the solution for data transport. However, among the numerous propositions, virtualization and the Cloud are the two main avenues that are widely used today and on which this book focuses.
1. Virtualization
In this chapter, we introduce virtualization, which is at the root of the revolution in the networking world, as it involves constructing software networks to replace hardware networks.
Figure 1.1 shows the process of virtualization. We simply need to write code that performs exactly the same function as the hardware component. With only a few exceptions, which we will explore later on, all hardware machines can be transformed into software machines. The basic problem associated with virtualization is a significant reduction in performance. On average (although the reality is extremely diverse), virtualization reduces performance by a factor of 100: that is, the resulting software, executed on a machine similar to the machine that has been virtualized, runs 100 times more slowly. In order to recover from this loss of performance, we simply need to run the program on a machine that is 100 times more powerful. This power is to be found in the datacenters hosted in Cloud environments that are under development in all corners of the globe.
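The arithmetic behind this compensation can be written out explicitly. The following Python fragment is only a back-of-the-envelope illustration: the throughput figure is chosen arbitrarily, and only the factor of 100 comes from the text.

```python
# Illustrative arithmetic: only the factor of 100 comes from the text;
# the throughput figure is an assumption made for the example.

hardware_router_pps = 20_000_000      # assumed throughput of a hardware router (packets/s)
slowdown = 100                        # average virtualization penalty cited in the text

# The same function in software, on comparable hardware, runs ~100x slower.
software_router_pps = hardware_router_pps / slowdown

# To recover the original capacity, ~100x more computing power is needed,
# e.g. by spreading the load over many servers in a datacenter.
extra_power_needed = hardware_router_pps / software_router_pps

print(f"Software router on comparable hardware: {software_router_pps:,.0f} packets/s")
print(f"Extra computing power needed to match the hardware: x{extra_power_needed:.0f}")
```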
It is not possible to virtualize certain elements, such as an antenna or a sensor, since no piece of software is capable of picking up electromagnetic signals or detecting temperature. Thus, we still need to keep hardware elements such as the metal wires and optical links, or the transmission/reception ports of a router or a switch. Nevertheless, all of the signal-processing operations can be virtualized perfectly well. Increasingly, we find virtualization in wireless systems.
In order to speed up the software processing, one solution is to move to a mode of concretization, i.e. the reverse of virtualization, but with one very significant difference: the hardware must behave like software. It is possible to replace the software, which is typically executed on a general-purpose machine, with a machine that can be reconfigured almost instantly and thus behaves like a software program. The components used are derived from FPGAs (Field-Programmable Gate Arrays) and, more generally, reconfigurable microprocessors. A great deal of progress still needs to be made in order to obtain extremely fast concretizations, but this is only a question of a few years.
The virtualization of networking equipment means we can replace hardware routers with software routers, and do the same for any other piece of hardware that could be made into software, such as switches, LSRs (Label Switching Routers), firewalls, diverse and varied boxes, DPI (Deep Packet Inspection) appliances, SIP servers, IP PBXs, etc. These new machines are superior in a number of ways. To begin with, one advantage is their flexibility. Let us look at the example given in Figure 1.1, where three hardware routers have been integrated in software form on a single server. The size of the three virtual routers can change depending on their workload. A router uses few resources at night, when there is little traffic, and very large resources at peak times in order to handle all the traffic.
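To illustrate this elasticity, here is a minimal Python sketch, not taken from the book, of a naive resizing policy for three virtual routers sharing one physical server; the server capacity, allocation rule and function names are assumptions made for the example.

```python
# Minimal sketch of elastic resource allocation for virtual routers
# sharing one physical server. All figures and names are illustrative.

SERVER_VCPUS = 32   # assumed capacity of the shared physical server

def vcpus_for_load(load_gbps):
    """Very simple allocation rule: roughly 1 vCPU per Gbit/s, with a floor of 1."""
    return max(1, round(load_gbps))

def resize_routers(loads_gbps):
    """Return a vCPU allocation per virtual router, scaled down
    proportionally if total demand exceeds the server's capacity."""
    demand = {name: vcpus_for_load(load) for name, load in loads_gbps.items()}
    total = sum(demand.values())
    if total <= SERVER_VCPUS:
        return demand
    # Peak time: shrink allocations proportionally to fit the server.
    return {name: max(1, int(v * SERVER_VCPUS / total)) for name, v in demand.items()}

# Night-time: little traffic, small virtual routers.
print(resize_routers({"vR1": 0.5, "vR2": 0.2, "vR3": 0.8}))
# Peak time: heavy traffic, the routers grow until the server is saturated.
print(resize_routers({"vR1": 14.0, "vR2": 10.0, "vR3": 12.0}))
```

In practice, decisions of this kind would be taken by the management and orchestration layer of the virtualization platform rather than by a standalone script, but the principle is the same: the size of each software router follows its traffic.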