Title: Handbook on Intelligent Healthcare Analytics
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Technical literature
ISBN: 9781119792536
An ANN is constructed along similar lines, except that the nodes take the place of neurons connected in a network; a three-layer network is shown here for simplicity. It consists of several layers of interconnected neurons, defined as the input layer, the hidden layer(s), and the output layer. The input neurons receive the initial information describing the problem, and the results and solutions appear at the output neurons. The hidden layer links the input and output layers. The diagram shows only one hidden layer, and for simplicity we keep to one layer in this section, although some implementations may have several such layers.
The arrows in the figure show the links between the n input neurons, the k hidden neurons, and the m output neurons. Information is fed from left to right, which is regarded as the feedforward process; a back-propagation process is covered in later sections. The way the network functions through its neurons has two major characteristics:
A neuron receives inputs from other neurons; however, the neuron only “fires” when the combined incoming signal is sufficiently important, that is, when it exceeds a firing threshold. Information passing from one neuron to another is weighted by a variable whose value is not determined by the data within either neuron. The network is used to efficiently represent solutions to the problem by manipulating these weighting variables.
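To make the feedforward behaviour described above concrete, the following is a minimal sketch, assuming a sigmoid activation and illustrative layer sizes n = 3, k = 4, and m = 2; the weights are random placeholders rather than trained values, and none of these names come from the chapter itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Smooth stand-in for the "firing" behaviour: the output rises sharply
    # once the weighted input crosses the neuron's threshold region.
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: n input, k hidden, and m output neurons.
n, k, m = 3, 4, 2

# Weights on the links between layers; these are the variables the network
# manipulates to represent a solution (random placeholders here).
W_hidden = rng.normal(size=(k, n))   # input  -> hidden links
W_output = rng.normal(size=(m, k))   # hidden -> output links
b_hidden = np.zeros(k)               # hidden-layer thresholds (biases)
b_output = np.zeros(m)               # output-layer thresholds (biases)

def forward(x):
    """Left-to-right (feedforward) pass through the three layers."""
    h = sigmoid(W_hidden @ x + b_hidden)   # hidden-layer activations
    y = sigmoid(W_output @ h + b_output)   # output-layer activations
    return y

x = np.array([0.2, 0.7, -0.1])   # example input pattern
print(forward(x))                # the network's current answer for this input
```

Training, i.e., the back-propagation process mentioned above, would then adjust W_hidden and W_output so that forward(x) moves closer to the desired output for each training example.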
1.9 Conclusion
One approach is to seek assistance from KBE techniques in solving the problem described previously. KBE spans a wide range of engineering technologies and provides tools that can capture and reuse product and process knowledge to deliver the details and data required by individual users or by an MDO environment. A good connection between rule-based reasoning, object-oriented modeling, and geometric modeling inside the KBE framework makes certain steps in the MDO process easy to capture and automate. As seen in this section, the MDO approach involves direct cooperation between testing, optimization, and other modules and codes that operate on several variables of differing natures to enhance the product. Capturing and reusing knowledge also helps deliver sustainable parametric models by adapting the data to the data models of the various disciplines and allowing adjustments to flow across them. A major benefit of this capability is that it allows the integration of homogeneous data sets across a range of simulation tools, so that data and concept information can be transmitted smoothly from low- to high-fidelity analysis models as the design progresses over time. It also helps to integrate in-house and acquired technical capabilities into the MDO environment and lets them operate in parallel with the evolution of data and architecture expertise and with the acquisition of increasingly complicated data structures combined with dynamic data.
2 A Framework for Big Data Knowledge Engineering
Devi T.1* and Ramachandran A.2†
1Department of Computer Science & Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
2Department of Computer Science & Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India
*Corresponding author: [email protected]
†Corresponding author: [email protected]
Abstract
Analytics and analysis over massive databases using various approaches and techniques are being experimented with, and ongoing research focuses its attention on domains such as big data. Economic and technological growth, together with the data production that accompanies them, is also a driver of big data approaches. Data from social media, online stock markets, healthcare records, etc., can be analyzed and combined with artificial intelligence by developing automated learning algorithms and by exploiting advances in cloud computing. The data may be either discrete or continuous and are independent of the various processes involved, so understanding and decision-making rely on knowledge engineering. The proposed work concentrates on transforming and analyzing observed sequential data from a weather forecasting dataset. Such systems can perform cognitive tasks while improving performance and preserving data integrity through the enhanced framework. Predicting natural disasters is a challenge for users of forecast data, since the data fluctuate frequently and often fail to update the localization, identified by sensor latitude and longitude, which is reported as a sequence at regular intervals from various directions. Four hidden states serve as the features that differentiate the probability distributions used to select the best cognitive tasks. An Improved Bayesian Hidden Markov Framework (IBHMF) is proposed to identify the exact flow of states and to detect high congestion, which can lead to earthquakes, tremors, etc. Because the analyzed data are unsupervised and the features are converted to discrete, sequential data (independent variables), IBHMF can be utilized to increase performance and produce accurate state-estimation results.
Keywords: Artificial intelligence, big data, Improved Bayesian Hidden Markov Frameworks (IBHMF), hidden state, knowledge engineering, weather forecasting
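To make the hidden-state estimation idea summarized in the abstract more concrete, the sketch below shows a plain discrete hidden Markov model with four hidden states decoded by the standard Viterbi algorithm. This is only an illustration under assumed parameters, not the proposed IBHMF: the state labels, observation coding, and probability matrices are all hypothetical.

```python
import numpy as np

# Four illustrative hidden states (assumed labels, not from the chapter).
states = ["calm", "unstable", "storm-risk", "quake-risk"]

# Discretized sensor readings: 0 = low, 1 = medium, 2 = high (assumed coding).
obs_seq = [0, 1, 2, 2, 1]

# Assumed HMM parameters: initial, transition, and emission probabilities.
pi = np.array([0.50, 0.30, 0.15, 0.05])
A = np.array([                      # A[i, j] = P(next state j | current state i)
    [0.70, 0.20, 0.08, 0.02],
    [0.20, 0.50, 0.20, 0.10],
    [0.05, 0.25, 0.50, 0.20],
    [0.05, 0.15, 0.30, 0.50],
])
B = np.array([                      # B[i, o] = P(observation o | state i)
    [0.70, 0.25, 0.05],
    [0.30, 0.50, 0.20],
    [0.10, 0.40, 0.50],
    [0.05, 0.25, 0.70],
])

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete observation sequence."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))             # best path probability ending in each state
    psi = np.zeros((T, N), dtype=int)    # back-pointers for path recovery
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1, :, None] * A      # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return list(reversed(path))

best = viterbi(obs_seq, pi, A, B)
print([states[s] for s in best])   # flags the high-reading stretch as risky states
```

In this toy setting the decoded sequence moves toward the riskier states while the sensor readings stay high, which is the kind of state-flow detection the abstract attributes to IBHMF.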
2.1 Introduction
Natural hazards have caused catastrophic damage along with socioeconomic losses, and the trend is increasing. Such disasters pose challenges to officials working in the disaster management field. These challenges include the unavailability of resources and a limited workforce, and these limitations force them to change their policies for managing disasters [1].
The amount of data generated is huge, including both real and simulation data. These data can be used to support disaster management. Technological advances such as social media and remote sensing produce data that are huge in size and are real data. At times, these real data are scarce, which leads to the use of simulation data. Several computational models can be used to generate simulation data, which can in turn be used to estimate the impact produced