of its predecessor’s units. Training was done using roughly 10% of the dataset, divided into a training set, a validation set and a test set. After setting a dropout of 0.2 for the input layer and 0.5 for the hidden layers, the model recognized arousal and valence with rates of 73.06% (73.14%), 60.7% (62.33%), and 46.69% (45.32%) for 2, 3, and 5 classes, respectively. The kernel-based classifier was observed to have better accuracy than other methods such as Naïve Bayes and SVM. The result was a set of 2,184 unique features describing EEG activity during each trial. These extracted features were used to train a DNN classifier and a random forest classifier. This approach was especially successful for BCI, where datasets are huge.
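As a rough sketch (not the authors' code), the following PyTorch model illustrates such an architecture: dropout of 0.2 applied to the input layer and 0.5 to the hidden layers, with the 2,184-dimensional feature vectors mapped to 2, 3 or 5 arousal/valence classes. The hidden width of 512 and the two-hidden-layer depth are assumptions, not values reported in the study.

```python
# Sketch of a dropout-regularized DNN for EEG emotion classification.
import torch
import torch.nn as nn

class EmotionDNN(nn.Module):
    def __init__(self, n_features=2184, n_classes=3, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(0.2),                 # dropout on the input layer
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),                 # dropout on hidden layers
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, n_classes),    # 2, 3 or 5 emotion classes
        )

    def forward(self, x):
        return self.net(x)

model = EmotionDNN(n_classes=3)
logits = model(torch.randn(8, 2184))         # batch of 8 feature vectors
print(logits.shape)                          # torch.Size([8, 3])
```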

Emotion monitoring using LVQ and EEG is used to identify emotions for medical therapy and rehabilitation. [3] proposes a system for monitoring human emotions in real time using wavelets and “Learning Vector Quantization” (LVQ). Training data from 10 trials with 10 subjects, “3 classes and 16 segments (equal to 480 sets of data)”, is processed within 10 seconds and decomposed into 4 frequency bands. These bands then become the input for LVQ, which sorts them into excited, relaxed or sad emotions. Alpha waves appear frequently when people are relaxed, beta waves occur when people think, theta waves occur when people are under stress, tired or sleepy, and delta waves occur when people are in deep sleep. EEG data is captured using an Emotiv Insight wireless headset on 10 participants. The wireless EEG electrodes “AF3”, “T7”, “T8” and “AF4” are recorded with a 128 Hz sampling frequency in the morning, at noon and at night. Each recording set contains 1,280 points, captured every 3 min and segmented every 10 s. Each participant is analyzed for excited, relaxed or sad states. Using the wavelet transform, the EEG is decomposed into the required frequency bands. The “discrete wavelet transform (DWT)” of a signal X(n) is described as follows:

W(j, k) = Σn X(n) 2^(−j/2) ψ(2^(−j)n − k)

where ψ(n) is known as the wavelet base function. The approximation and detail signals below are generated by convolving the original signal with the low-pass and high-pass filters, respectively, followed by subsampling by two:

ylow(k) = Σn x(n) g(2k − n)

yhigh(k) = Σn x(n) h(2k − n)

      Where, x(n) = original signal

       g(n) = low pass filter coeff

       h(n) = high pass filter coeff

 k, n = indices running from 1 to the length of the signal

 Scale function coefficients (low-pass filter): g0 = (1 − √3)/(4√2), g1 = (3 − √3)/(4√2), g2 = (3 + √3)/(4√2), g3 = (1 + √3)/(4√2)

 Wavelet function coefficients (high-pass filter): h0 = (1 − √3)/(4√2), h1 = −(3 − √3)/(4√2), h2 = (3 + √3)/(4√2), h3 = −(1 + √3)/(4√2)
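As an illustration (an assumed implementation, not taken from the paper), one level of this DWT filter bank can be written as a convolution with g(n) and h(n) followed by subsampling by two; repeating the step on the approximation output splits a 128 Hz EEG segment into the beta, alpha, theta and delta ranges. The segment length and variable names are illustrative.

```python
# Sketch of a DWT filter-bank decomposition using the D4 coefficients above.
import numpy as np

s3, s42 = np.sqrt(3.0), 4.0 * np.sqrt(2.0)
g = np.array([1 - s3, 3 - s3, 3 + s3, 1 + s3]) / s42          # low-pass g(n)
h = np.array([1 - s3, -(3 - s3), 3 + s3, -(1 + s3)]) / s42    # high-pass h(n)

def dwt_step(x):
    """One decomposition level: returns (approximation, detail)."""
    approx = np.convolve(x, g)[1::2]   # low-pass filter, keep every 2nd sample
    detail = np.convolve(x, h)[1::2]   # high-pass filter, keep every 2nd sample
    return approx, detail

fs = 128                               # Emotiv Insight sampling rate (Hz)
x = np.random.randn(10 * fs)           # 10 s segment of one EEG channel
a1, d1 = dwt_step(x)                   # d1: ~32-64 Hz (discarded / noise)
a2, d2 = dwt_step(a1)                  # d2: ~16-32 Hz (beta)
a3, d3 = dwt_step(a2)                  # d3: ~8-16 Hz (alpha)
a4, d4 = dwt_step(a3)                  # d4: ~4-8 Hz (theta); a4: ~0-4 Hz (delta)
```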

When the class label of each input is known, a supervised version of vector quantization called “Learning Vector Quantization” can be used; the predicted class depends on the Euclidean distance between the input vector and the reference (weight) vectors. The class of each training sample was compared based on:

Dj = √(Σi (Xi − Wij)²)

Following is the series of stages in the input identification system:

[Figure: input identification system]

“As stated, the LVQ algorithm attempted to correct the winning weight Wi (the one with minimum distance D) by shifting it by the following values (a sketch of this update rule follows the list):

1 If the input xi and the winning wi have the same class label, then move them closer together by ΔWi(j) = B(j)(Xij − Wij).

2 If the input xi and the winning wi have a different class label, then move them apart by ΔWi(j) = −B(j)(Xij − Wij).

3 Voronoi vectors/weights wj corresponding to other input regions are left unchanged with Δwj(t) = 0.”
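A minimal sketch of this LVQ1 update rule is given below (an assumed implementation; the decaying learning-rate schedule B(j), the epoch count and the function names are illustrative assumptions).

```python
# Sketch of LVQ1: pull the winning prototype toward same-class inputs,
# push it away from different-class inputs; leave all other prototypes alone.
import numpy as np

def lvq_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    W = prototypes.copy()
    for epoch in range(epochs):
        beta = lr * (1.0 - epoch / epochs)        # decaying learning rate B(j)
        for x, label in zip(X, y):
            d = np.linalg.norm(W - x, axis=1)     # Euclidean distance to each w_i
            i = np.argmin(d)                      # winning reference vector
            if proto_labels[i] == label:
                W[i] += beta * (x - W[i])         # same class: move closer
            else:
                W[i] -= beta * (x - W[i])         # different class: move apart
    return W

def lvq_predict(X, W, proto_labels):
    """Assign each row of X the label of its nearest reference vector."""
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)   # (N, P)
    return np.asarray(proto_labels)[np.argmin(dists, axis=1)]
```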

Here, 6 different emotional states (sorrow, fear, happiness, frustration, satisfaction and enjoyment) can be classified by extracting features from EEG signals with different methods. Decent accuracy was achieved by extracting appropriate features with discrete wavelet transforms and recognizing the emotional states with an ANN. In the model of Ref. [4], valence emotions range from negative to positive whereas arousal emotions range from calm to excited. Discrete wavelet transforms were applied to the brain signals to extract different feature sets. The model used here is the 2-dimensional arousal-valence model. Stimuli were invoked in the participants’ neural signals using the IAPS dataset, which contains 956 images covering all emotional states; the IAPS participants rated every picture for valence and arousal. 18 electrodes of a 21-electrode headset, placed according to the 10–20 standard system, were used with a sampling rate of 128 Hz. Since every subject’s emotions are different, each subject rated his or her emotion with a self-assessment manikin (SAM) on the 2-dimensional (arousal/valence) model, each dimension having 5 levels of intensity. The test was attended by 5 participants between the ages of 25 and 32. Each participant was given a stimulus of 5 s, since the duration of each emotion is about 0.5 to 4 s.

To do this, the data is derived from 4 frequency bands: alpha, beta, theta and delta. ECG (heart) artefacts at about 1.2 Hz, EOG (blinking) artefacts below 4 Hz, EMG (muscle) artefacts at about 30 Hz, and non-physiological power-line artefacts above 50 Hz are removed in preprocessing. In the DWT, all frequency bands are used, and for each trial the feature vector has 18 ∗ 3 ∗ 9 ∗ 4 = 1,944 elements (18 electrodes, 3 statistical features, 9 temporal windows and 4 frequency bands). In this instance, an artificial neural network trained with the backpropagation algorithm is used as the classifier. The architecture consists of 6 outputs, one for each emotional state, and 10 hidden layers. A 10-fold cross-validation technique was used to avoid overfitting while estimating the classifiers’ accuracies. As a user’s emotion can be affected by many factors, such as their emotional state during the experiment, the best accuracy achieved by the network was 55.58%.
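As a hedged illustration (not the authors' network), the scikit-learn sketch below reproduces this classification stage under the assumption that the "10 hidden" figure refers to a single hidden layer of 10 units, with 6 output classes and 10-fold cross-validation on 1,944-dimensional feature vectors; the feature matrix here is a random placeholder.

```python
# Sketch of a backpropagation-trained MLP evaluated with 10-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

n_trials, n_features = 200, 18 * 3 * 9 * 4            # 1,944 features per trial
X = np.random.randn(n_trials, n_features)              # placeholder feature matrix
y = np.random.randint(0, 6, size=n_trials)              # 6 emotion classes

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)               # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.2%}")
```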

A Support Vector Machine was applied to explore the relationship between neural signals elicited in the prefrontal cortex and taste in music. The authors of [5] explored the effects of music on mental illnesses like dementia. It was observed that music enabled listeners to regulate negative behaviors and thoughts occurring in the mind. A BCI-based music system can analyze the real-time activity of neurons and provide physiological information to the therapist for understanding the patients’ emotions. The methods used to evaluate the data depended on the subjects.
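A minimal sketch of such an SVM stage, under assumed feature dimensions and labels (liked vs. disliked music), could look as follows; it is illustrative only.

```python
# Sketch of an SVM classifying prefrontal EEG features by music preference.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.random.randn(120, 64)                  # placeholder prefrontal EEG features
y = np.random.randint(0, 2, size=120)         # 0 = disliked music, 1 = liked music

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```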
