Title: Handbook on Intelligent Healthcare Analytics
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Technical literature
ISBN: 9781119792536
2.2 Big Data in Knowledge Engineering
Information plays a major part in big data rules; intervention technologies can accumulate information according to need. Many mathematical computations are performed on streaming data as the requirements demand. The Internet provides many resources to learn from and a platform for online learners. In such cases, big data, with its multiple ways of providing sources, can support applications built to customers' expectations. Online learning uses many techniques, such as fragmented knowledge transfer, that rely on big data models. Mining can also extract information on demand, creating a framework that can be used according to knowledge engineering.
The IBM software built with NNs [13] and optimized resources also has certain rules to satisfy. It is applied to domains such as machine learning, computer vision, and NLP. Online learning uses not only the fragmented knowledge method but also translation for learners of various languages. These accessible translations are associated with libraries that help online providers reach the reader or learner rapidly. Mobile applications are also interconnected for knowledge services using big data learning procedures that can interpret the various user modes. Intelligent systems in AI, such as robotics, automatic sensors, and the latest technologies, are all designed with the help of a big data framework. There are four generations in knowledge engineering, which are the driving force behind consuming data and the massive volume of data accessed through the Internet. To understand this generation, cognitive tasks are introduced, which flow sequentially toward the customers' expectations.
2.2.1 Cognitive Tasks for Time Series Sequential Data
Time series [14] analysis covers sequential data such as frequent changes in the market, sensor updates in space, and instant observation maintenance. The research work focuses on time series [22, 23] sequential data, such as weather forecasting, to predict natural disasters accurately from the changes in sequential information. Since the longitude and latitude values from GPS are not always the same, a feed-forward network, which can have multiple hidden states, is used to analyze the information. Various features from the Kaggle dataset are used to identify the current state, without a hidden model, through a Bayesian Markov chain model that accesses columns such as entity and economic changes due to various disasters [2].
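To make the non-hidden case concrete, the following minimal Python sketch estimates a first-order Markov transition matrix directly from an observed state sequence; the state labels are hypothetical and are not taken from the dataset in [2].

import numpy as np

# Hypothetical observed weather states (no hidden layer):
# 0 = calm, 1 = storm, 2 = quake
states = np.array([0, 0, 1, 1, 2, 0, 1, 2, 2, 0, 0, 1])

n = 3  # number of distinct states
counts = np.zeros((n, n))
for i, j in zip(states[:-1], states[1:]):
    counts[i, j] += 1  # count each observed i -> j transition

# Row-normalize the counts into probabilities P(next = j | current = i)
transition = counts / counts.sum(axis=1, keepdims=True)
print(transition)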
Information processing is of two types: (i) behavioral processes and (ii) cognitive processes, which are mostly used in knowledge engineering [15, 16]. Big data streams arrive through online processing from social users on Instagram, Twitter, and so on, as video files, audio files, text data, and the like; they are organized according to the needed sequential changes. Transitions based on the current state, with their paths represented as a directed graph, are the most important analysis for maintaining the network flow (see the sketch below). Applications include weather forecasting, which detects severity through temperature changes and climate changes, and complex tasks that predict a climate that brings a disaster such as an earthquake or volcanic difference [12].
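A directed graph of such state transitions can be sketched as follows; the states and edge probabilities are assumptions chosen only to illustrate the structure.

import networkx as nx

# Hypothetical state-transition graph: nodes are states, edge weights are
# the probability of moving from the current state to the next one.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("calm", "calm", 0.6), ("calm", "storm", 0.4),
    ("storm", "storm", 0.5), ("storm", "quake", 0.5),
    ("quake", "calm", 1.0),
])

# The outgoing probabilities from each state must sum to 1.
for state in G.nodes:
    total = sum(d["weight"] for _, _, d in G.out_edges(state, data=True))
    print(state, "->", total)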
2.2.2 Neural Network for Analyzing Weather Forecasting
NN is the most important concept in the AI system and can control the weather forecast system so that the impact of a natural disaster is reduced. This work proposes a novel approach to predicting the hidden state: new changes update the current state, which can be represented as a directed graph that is true to its transition matrix. The modeling levels are differentiated by performance analysis over the range of economic difference, and the approach steps into a neural, cognitive task [15] when there is a combination of multilevel longitudinal data arriving from various directions (north, south, east, and west) that change frequently. IBHMF's main task is to reduce the noise in the dataset; it takes particular columns to predict the performance rate and accuracy rate. Based on hidden NNs, the weather features can be variations, categorical data variables, and so on, which are not instantly stable [13].
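The exact denoising procedure of IBHMF is not spelled out here; as a hedged sketch of the column selection and noise-reduction step, assuming pandas, hypothetical column names, and a rolling median as the smoother:

import pandas as pd

# Hypothetical disaster-management records; the column names are illustrative.
df = pd.DataFrame({
    "temperature": [21.0, 35.5, 22.1, 21.8, 40.2, 22.3],
    "volcano":     [0.1, 0.1, 0.9, 0.2, 0.1, 0.1],
})

# Keep only the columns the model needs, then damp spikes (sensor noise)
# with a centered rolling median before training.
selected = df[["temperature", "volcano"]]
denoised = selected.rolling(window=3, center=True, min_periods=1).median()
print(denoised)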
2.2.3 Improved Bayesian Hidden Markov Frameworks
The Markov chain process is a technique for detecting a chain represented as a random process. In this instance, the system can find such representations in the disaster management dataset. In the Markov chain process, transition probabilities are described by graph representations, where each representation is the probability of moving from one state to another. In the state representation, the variables are declared as i and j, and the transitions run forward from state i to state j.
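A small worked example, with an assumed two-state matrix, shows what moving from state i to state j means numerically:

import numpy as np

# Assumed transition matrix: rows index the current state i, columns the
# next state j; entry [i, j] is P(next = j | current = i).
P = np.array([
    [0.7, 0.3],   # from state 0
    [0.5, 0.5],   # from state 1
])

v = np.array([1.0, 0.0])                 # start in state 0 with certainty
print(v @ P)                             # after one step: [0.7, 0.3]
print(v @ np.linalg.matrix_power(P, 3))  # distribution after three steps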
In the disaster management dataset, temperature, dry earthquake, and volcano are the attributes. The temperature attribute records the daily temperature in degrees Celsius, which relates to disaster effects. The dry earthquake attribute records the causes of earthquakes, and the volcano attribute records the heat of the lava and the temperature of the water vapor.
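A minimal sketch of how such records might be laid out follows; every value is invented for illustration and does not come from the actual dataset.

import pandas as pd

# Invented sample rows mirroring the three attributes described above.
records = pd.DataFrame({
    "temperature": [28.4, 31.0, 29.7],                   # daily value, Celsius
    "dry_earthquake": ["tectonic", "none", "volcanic"],  # recorded cause
    "volcano": [870.0, 0.0, 910.0],                      # lava heat indicator
})
print(records.dtypes)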
Hidden states are the layers that are trained and tested in the modeling process. Hidden layers consist of weights and biases, which help in the training process: the weights multiply the input, and the biases are added to the weighted input passed to the next hidden layer. These hidden layers are processed in the forward and backward directions, which reduces the error and loss rate and increases the accuracy of the outcomes. In the output layer, an activation function takes the hidden layer output and converts it into a binary format for the user's understanding.
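A minimal NumPy sketch of this forward pass, assuming one hidden layer and a sigmoid output thresholded to binary (the layer sizes and random values are arbitrary):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # one input sample with 4 features

# Hidden layer: the weights multiply the input, the bias is added on top.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
h = np.tanh(x @ W1 + b1)               # hidden activations

# Output layer: sigmoid squashes to (0, 1), then a threshold gives binary.
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
prob = sigmoid(h @ W2 + b2)
print(int(prob.item() > 0.5))          # binary outcome for the user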
A brute-force solution that enumerates every state sequence would take exponential time; in this approach, the sequence of observations for each state is instead executed using the Viterbi algorithm, which handles such inputs efficiently. The maximum probability of a sequence path over states and time is stored in p; these values of p are the property of Markov models. This process leads the model into the forward recursion:
$p_t(j) = \max_{i}\left[p_{t-1}(i)\,a_{ij}\right]b_j(O_t)$

where $O_t$ = the observation at time $t$, $i$ = the initial (previous) state, and $j$ = the hidden state of the Markov model; $a_{ij}$ is the transition probability from state $i$ to state $j$, and $b_j(O_t)$ is the probability of emitting $O_t$ from state $j$.
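A compact Python sketch of this recursion follows; all probabilities are invented, and the variable names (init, trans, emit) are assumptions for the illustration.

import numpy as np

# Assumed toy HMM: 2 hidden states, 2 observable symbols.
init  = np.array([0.6, 0.4])            # initial state probabilities
trans = np.array([[0.7, 0.3],           # a[i, j] = P(j at t | i at t-1)
                  [0.4, 0.6]])
emit  = np.array([[0.9, 0.1],           # b[j, o] = P(symbol o | state j)
                  [0.2, 0.8]])
obs = [0, 1, 1]                         # observed sequence O_1 .. O_T

# p[t, j] holds the maximum probability of any state path ending in j at t.
T, N = len(obs), len(init)
p = np.zeros((T, N))
p[0] = init * emit[:, obs[0]]
for t in range(1, T):
    for j in range(N):
        p[t, j] = np.max(p[t - 1] * trans[:, j]) * emit[j, obs[t]]

print(p[-1].max())                      # probability of the best path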
In this model, the Markov chain consists of four states; the transition probability between the temperature and volcano effects is 0.5: if there was high pressure today, then there is a 50% chance that it will cause a disaster tomorrow. The invisible Markov chain processes each and every state, producing a random output for each transition. These outputs are stored in another variable of observations, named m, and they are visible to the user. There are four states of disaster management, and the initial probabilities, together with the temperature and volcano probabilities, form the transition probability matrix.
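A hedged sketch of this generative view, with an assumed four-state chain (the 0.5 entry mirrors the temperature-to-volcano transition above; every other number is invented):

import numpy as np

rng = np.random.default_rng(1)

# Assumed 4-state transition matrix; the 0.5 entry in the first row
# mirrors the temperature -> volcano transition described above.
trans = np.array([
    [0.5,  0.5,  0.0,  0.0],
    [0.2,  0.3,  0.4,  0.1],
    [0.1,  0.2,  0.3,  0.4],
    [0.25, 0.25, 0.25, 0.25],
])
emit = np.array([   # P(visible symbol | hidden state), 3 symbols
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
    [0.3, 0.4, 0.3],
])

state, m = 0, []    # m collects the visible observations
for _ in range(10):
    m.append(int(rng.choice(3, p=emit[state])))  # random output per step
    state = int(rng.choice(4, p=trans[state]))   # hidden move to next state
print(m)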
Parameter learning in the hidden Markov model is executed using the Baum-Welch algorithm, which finds the probability of the observations at a local maximum. The following expression shows the learning approach:

$M^{*} = \arg\max_{M}\, p(O \mid M)$

where $p$ = probability, $O$ = the observation sequence over which the iteration converges to a local optimum, and $M$ = the model parameters.
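In practice, this expectation-maximization loop is rarely hand-written; a hedged sketch using the third-party hmmlearn library (assuming its CategoricalHMM class for discrete symbols; the API may differ across versions):

import numpy as np
from hmmlearn import hmm   # third-party: pip install hmmlearn

# Toy observation sequence of discrete symbols, shaped (n_samples, 1).
O = np.array([0, 1, 1, 2, 0, 0, 1, 2, 2, 1]).reshape(-1, 1)

# fit() runs Baum-Welch (EM), converging to a local maximum of p(O | M).
model = hmm.CategoricalHMM(n_components=4, n_iter=100, random_state=0)
model.fit(O)

print(model.transmat_)       # learned transition probabilities
print(model.emissionprob_)   # learned emission probabilities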
These chain properties are applied in discrete steps, one at each stage of the process. Let us consider the sequence of generated variables as follows: