
Companies can collect a huge amount of information, the size of which becomes a critical factor in analytics.

      – Velocity (speed): data must be kept up to date, otherwise it becomes obsolete and loses value. Almost everything that happens around us (search queries, social networks) produces new data, much of which can be used for analysis.

      – Variety: the generated information is heterogeneous and can be presented in different formats: video, text, tables, numerical sequences, sensor readings.

      – Veracity (reliability): the quality of the data being analysed. The data must be reliable and valuable enough to be trusted for analysis. Low-quality data also contains a high percentage of meaningless information, called noise, which has no value.
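      To make Velocity and Veracity more tangible, here is a minimal sketch in Python of the kind of pre-analysis checks a data team might run. The field names, freshness window and valid range are illustrative assumptions for the example, not recommendations from the book.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sensor records; field names and values are assumptions for illustration.
records = [
    {"sensor_id": "pump-01", "value": 42.1, "ts": datetime.now(timezone.utc)},
    {"sensor_id": "pump-01", "value": None, "ts": datetime.now(timezone.utc) - timedelta(days=2)},
]

MAX_AGE = timedelta(hours=1)   # Velocity: anything older than this is treated as stale
VALID_RANGE = (0.0, 100.0)     # Veracity: readings outside this range are treated as noise

def is_usable(record):
    """Keep only fresh, complete and plausible records."""
    if record["value"] is None:                                  # completeness
        return False
    if datetime.now(timezone.utc) - record["ts"] > MAX_AGE:      # freshness (Velocity)
        return False
    low, high = VALID_RANGE
    return low <= record["value"] <= high                        # plausibility (Veracity)

clean = [r for r in records if is_usable(r)]
print(f"{len(clean)} of {len(records)} records are usable for analysis")
```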

      Restrictions on Big Data Implementation

      The main limitations are the quality of the raw data, critical thinking (what do we want to see? what pain are we addressing? this is what ontological models are for), and the right selection of competencies. And, most importantly, people. Data scientists are the ones who work with the data. There is also a common joke: 90% of data scientists are data satanists.

      Digital twins

      A digital twin is a digital (virtual) model of any object, system, process or person. By design, it accurately reproduces the form and behaviour of the physical original and is synchronized with it. The error between the twin and the real object should not exceed 5%.
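      As a minimal illustration of what synchronization and the 5% limit can mean in practice, the sketch below compares a twin's predicted value with a measurement from the physical object and reports whether the twin is still in sync. The toy model (output proportional to load) and all names are hypothetical assumptions, not the book's method.

```python
class DigitalTwin:
    """Toy digital twin: predicts one parameter and tracks deviation from the real object."""

    MAX_RELATIVE_ERROR = 0.05  # the 5% limit mentioned in the text

    def __init__(self, coefficient: float):
        self.coefficient = coefficient  # hypothetical model parameter

    def predict(self, load: float) -> float:
        # Assumed toy model: the monitored parameter is proportional to load.
        return self.coefficient * load

    def is_in_sync(self, load: float, measured: float) -> bool:
        """Check whether the twin still matches the physical original within 5%."""
        relative_error = abs(self.predict(load) - measured) / abs(measured)
        return relative_error <= self.MAX_RELATIVE_ERROR


twin = DigitalTwin(coefficient=1.8)
print(twin.is_in_sync(load=50.0, measured=92.0))  # True: within 5% of the measurement
print(twin.is_in_sync(load=50.0, measured=75.0))  # False: the twin needs recalibration
```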

      It must be understood that it is almost impossible to create an absolute digital twin, so it is important to determine which domain it is rational to model.

      The concept of the digital twin was first described in 2002 by Michael Grieves, a professor at the University of Michigan. In his book «The Origin of Digital Twins» he divided the digital twin into three main parts:

      1) physical product in real space;

      2) virtual product in virtual space;

      3) data and information that combine virtual and physical products.

      The digital twin itself can be:

      – a prototype – an analogue of the real object in the virtual world that contains all the data needed to produce the original;

      – a copy – the operating history and data on all characteristics of the physical object, including the 3D model; the copy runs in parallel with the original;

      – an aggregated twin – a combined system of a digital twin and the real object, which can be managed, and whose data can be exchanged, from a single information space.

      The development of artificial intelligence and the falling cost of the Internet of Things have taken the technology much further. Digital twins began to receive «clean» big data about the behaviour of real objects, and it became possible to predict equipment failures long before an accident. Although this last claim is quite controversial, the field is developing actively.

      As a result, the digital twin is a synergy of 3D technologies, including augmented or virtual reality, artificial intelligence and the Internet of Things. It is a synthesis of several technologies and basic sciences.

      Digital twins themselves can be divided into four levels.

      • The twin of an individual assembly unit simulates the most critical component: a specific bearing, motor brushes, a stator winding or a pump motor. In general, whichever one has the greatest risk of failure.

      • The twin of a unit simulates the operation of the entire unit, for example, a gas turbine unit or the whole pump.

      • The production system twin simulates several assets linked together: a production line or the entire plant.

      • The process twin is no longer about «hardware» but about modelling processes, for example, when implementing MES or APS systems. We will talk about them in the next chapter.

      What problems can digital twin technology solve?

      • It becomes possible to reduce the number of design changes and the associated costs already at the stage of designing the equipment or plant, which significantly reduces costs at the remaining stages of the life cycle. It also helps avoid critical errors that cannot be corrected at the operation stage.

      The sooner an error is detected, the cheaper it is to fix: over time, besides the growth in cost, there is less and less room for correcting errors.

      • By collecting, visualizing and analysing data, it becomes possible to take preventive measures before serious accidents and equipment damage occur.

      • Maintenance costs can be optimized while overall reliability increases. The ability to predict failures makes it possible to repair equipment based on its actual condition rather than on the «calendar», so there is no need to keep a large stock of spare equipment, that is, to freeze working capital (a minimal sketch of this condition-based logic follows this list).

      Using digital twins in combination with big data and neural networks is the way from reporting and monitoring to predictive analytics and accident prevention systems.

      • Operating modes can be built as efficiently as possible and production costs minimized. The longer data is accumulated and the deeper the analytics, the more effective the optimization will be.
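      Here is the minimal sketch of condition-based maintenance referenced in the list above. It assumes a hypothetical health index (1.0 means new equipment), a naive constant-wear forecast and an arbitrary spare-part lead time; none of these numbers come from the book.

```python
def days_until_limit(health_history, limit=0.4):
    """Naively extrapolate a daily health index down to the assumed service limit."""
    # Average degradation per day over the observed history.
    wear_per_day = (health_history[0] - health_history[-1]) / (len(health_history) - 1)
    if wear_per_day <= 0:
        return float("inf")  # no measurable wear, nothing to plan yet
    return (health_history[-1] - limit) / wear_per_day


def maintenance_decision(health_history, lead_time_days=14):
    """Repair on actual condition: act only when the forecast margin is shorter than the lead time."""
    remaining = days_until_limit(health_history)
    if remaining <= lead_time_days:
        return f"order parts and schedule repair (about {remaining:.0f} days of margin left)"
    return "keep operating, no calendar-based stop needed"


history = [1.00, 0.97, 0.93, 0.90, 0.86]  # assumed daily health index readings
print(maintenance_decision(history))
```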

      It is very important not to confuse the types of forecasting. Lately, working with the market of various IT solutions, I constantly see the concepts of predictive analytics and machine detection of anomalies in equipment operation being confused. That is, having implemented machine detection of deviations, vendors talk about introducing a new, predictive approach to organizing maintenance.

      On the one hand, neural networks really do work in both cases. With machine anomaly detection, the neural network also finds deviations, which makes it possible to carry out maintenance before a serious failure and to replace only the worn-out element.

      However, let’s take a closer look at the definition of predictive analytics.

      Predictive (forecasting) analysis is a prediction based on historical data.

      So it is the ability to predict equipment failures before an anomaly even appears: when operating parameters are still normal, but trends towards deviation are already developing.

      To put it in very everyday terms, anomaly detection is when your blood pressure has already changed and you are warned about it before you get a headache or heart problems. Predictive analytics is when everything is still normal, but you have changed your diet, your sleep quality or something else, and with them the processes in your body that will subsequently lead to increased pressure.
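      To make the distinction concrete, here is a minimal sketch under assumed data and thresholds: anomaly detection flags a reading that already deviates from recent behaviour, while predictive analytics fits a trend to historical data and estimates when a still-normal parameter will cross its limit. The vibration values, window and limits are illustrative, not taken from the book.

```python
from statistics import mean, stdev

def detect_anomaly(window, latest, z_limit=3.0):
    """Anomaly detection: flag the latest reading if it already deviates from the recent window."""
    mu, sigma = mean(window), stdev(window)
    return sigma > 0 and abs(latest - mu) / sigma > z_limit


def days_until_threshold(history, threshold):
    """Predictive analytics (toy version): fit a linear trend and forecast when the limit is crossed."""
    n = len(history)
    xs = list(range(n))
    x_mean, y_mean = mean(xs), mean(history)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / sum(
        (x - x_mean) ** 2 for x in xs
    )
    if slope <= 0:
        return None  # no upward trend, nothing to forecast
    return (threshold - history[-1]) / slope


vibration = [2.0, 2.1, 2.1, 2.3, 2.4, 2.6, 2.7]  # assumed daily readings, still within the norm
print(detect_anomaly(vibration[:-1], vibration[-1]))   # False: nothing abnormal yet
print(days_until_threshold(vibration, threshold=4.0))  # about 11 days before the limit is reached
```

      In this toy case the anomaly detector stays silent, while the trend forecast already gives a planning horizon.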

      As a result, the main differences are the depth of analysis, the competencies required and the prediction horizon. Anomaly detection is a short-term prediction that helps avoid a crisis. For this, you do not need to study historical data over a long period of time,