EEG Signal Processing and Machine Learning. Saeid Sanei

Title: EEG Signal Processing and Machine Learning

Author: Saeid Sanei

Publisher: John Wiley & Sons Limited

ISBN: 9781119386933

A cost function is defined as a function of the squared error (often called a performance index), such as η(e²(n)), such that it monotonically decreases after each iteration and converges to a global minimum. This requires:

      (4.97)\quad \eta\big(e^{2}(n+1)\big) < \eta\big(e^{2}(n)\big)

      (4.99)\quad \mathbf{w}(n+1) = \mathbf{w}(n) - \mu\,\nabla_{\mathbf{w}}\big(\eta(\mathbf{w}(n))\big)

      Using the least mean square (LMS) approach, ∇w (η(w)) is replaced by an instantaneous gradient of the squared error signal, i.e.:

      (4.100)\quad \nabla_{\mathbf{w}}\big(e^{2}(n)\big) = -2\,e(n)\,\mathbf{x}(n)

      Therefore, the LMS‐based update equation is

      (4.101)\quad \mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\,e(n)\,\mathbf{x}(n)

      Also, the convergence parameter, μ, must be positive and should satisfy the following:

      (4.102)\quad 0 < \mu < \frac{1}{\lambda_{\max}}

      where λ_max represents the maximum eigenvalue of the autocorrelation matrix R of the input. The LMS algorithm is the simplest and most computationally efficient adaptive algorithm. However, its convergence can be slow, especially for correlated input signals. The recursive least-squares (RLS) algorithm attempts to provide a fast and stable filter, but it is numerically unstable for real-time applications [40, 41]. Defining the performance index as an exponentially weighted sum of squared errors, with forgetting factor 0 < β ≤ 1:

      J(n) = \sum_{k=0}^{n} \beta^{\,n-k}\, e^{2}(k)

      Then, by taking the derivative with respect to w we obtain

      (4.105)\quad \nabla_{\mathbf{w}} J(n) = 2\,\mathbf{R}(n)\,\mathbf{w}(n) - 2\,\mathbf{P}(n)

      where

      (4.106)\quad \mathbf{R}(n) = \sum_{k=0}^{n} \beta^{\,n-k}\,\mathbf{x}(k)\,\mathbf{x}^{T}(k)

      and

      (4.107)\quad \mathbf{P}(n) = \sum_{k=0}^{n} \beta^{\,n-k}\,\mathbf{x}(k)\,d(k)

      From this equation, setting the gradient to zero gives:

      (4.108)\quad \mathbf{w}(n) = \mathbf{R}^{-1}(n)\,\mathbf{P}(n)

      The RLS algorithm performs the above operation recursively such that P and R are estimated at the current time n as:

      (4.109)\quad \mathbf{R}(n) = \beta\,\mathbf{R}(n-1) + \mathbf{x}(n)\,\mathbf{x}^{T}(n)

      (4.110)\quad \mathbf{P}(n) = \beta\,\mathbf{P}(n-1) + \mathbf{x}(n)\,d(n)

      (4.111)\quad \mathbf{x}(n) = [x(n),\, x(n-1),\, \ldots,\, x(n-M+1)]^{T}

      where M represents the finite impulse response (FIR) filter order. Consequently, the required inverse of the correlation matrix becomes:

      (4.112)\quad \mathbf{R}^{-1}(n) = \big[\beta\,\mathbf{R}(n-1) + \mathbf{x}(n)\,\mathbf{x}^{T}(n)\big]^{-1}

      which can be simplified using the matrix inversion lemma [42]:

      (4.113)\quad \mathbf{R}^{-1}(n) = \frac{1}{\beta}\left[\mathbf{R}^{-1}(n-1) - \frac{\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)}{\beta + \mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}\right]

      and finally, the update equation can be written as:

      (4.114)\quad \mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,e(n)

      where

      (4.115)\quad \mathbf{k}(n) = \mathbf{R}^{-1}(n)\,\mathbf{x}(n) = \frac{\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}{\beta + \mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}

      and the error e(n) after each iteration is recalculated as:

      (4.116)\quad e(n) = d(n) - \mathbf{w}^{T}(n-1)\,\mathbf{x}(n)
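As a worked sketch, the LMS recursion and the RLS recursions above can be implemented in a few lines. The following is a minimal, hypothetical system-identification example; the test signal, the filter order M, the step size μ, and the forgetting factor β are arbitrary illustrative choices, not values taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an unknown FIR system h is driven by white noise x(n);
# the adaptive filter estimates h from the noisy output d(n).
M = 4                                   # FIR filter order
N = 2000                                # number of samples
h = np.array([0.6, -0.3, 0.2, 0.1])     # "unknown" system to identify
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

def regressor(x, n, M):
    """Tap-input vector x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T (Eq. 4.111)."""
    taps = x[max(0, n - M + 1): n + 1][::-1]
    return np.pad(taps, (0, M - len(taps)))      # zero-pad at start-up

def lms(x, d, M, mu):
    """LMS: w(n+1) = w(n) + 2*mu*e(n)*x(n) (Eq. 4.101)."""
    w = np.zeros(M)
    for n in range(len(x)):
        xn = regressor(x, n, M)
        e = d[n] - w @ xn
        w = w + 2 * mu * e * xn
    return w

def rls(x, d, M, beta=0.99, delta=100.0):
    """RLS with forgetting factor beta, following Eqs. (4.113)-(4.116)."""
    w = np.zeros(M)
    Rinv = delta * np.eye(M)            # conventional large initial R^{-1}
    for n in range(len(x)):
        xn = regressor(x, n, M)
        k = Rinv @ xn / (beta + xn @ Rinv @ xn)         # gain vector (4.115)
        e = d[n] - w @ xn                               # a priori error (4.116)
        w = w + k * e                                   # weight update (4.114)
        Rinv = (Rinv - np.outer(k, xn @ Rinv)) / beta   # inversion lemma (4.113)
    return w

# Step size chosen well inside the stability bound of (4.102): for a
# unit-variance white input, lambda_max of R is approximately 1.
w_lms = lms(x, d, M, mu=0.05)
w_rls = rls(x, d, M)
```

Both estimates converge to h here; RLS typically does so in far fewer samples than LMS, at the cost of O(M²) operations per update instead of O(M).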

      Suboptimal transforms such as the DFT and the DCT decompose a signal into a set of coefficients that do not necessarily represent its constituent components. Moreover, because the transform kernel is independent of the data, such transforms are not efficient in terms of either decorrelation of the samples or energy compaction. Therefore, separation of the signal and noise components is generally not achievable using these suboptimal transforms.

      Expansion of the data into a set of orthogonal components certainly achieves maximum decorrelation of the signals. This enables separation of the data into the signal and noise subspaces.

Schematic illustration of the general application of PCA.

      (4.117)
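The decorrelation and signal/noise subspace separation described above can be sketched with an eigendecomposition of the sample covariance matrix. This is a minimal illustration on a synthetic multichannel mixture; the channel count, the two latent sources, and the noise level are arbitrary assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recording: 2 latent sources mixed into 8 channels plus
# low-level sensor noise.
n_channels, n_sources, n_samples = 8, 2, 1000
t = np.arange(n_samples)
sources = np.vstack([np.sin(2 * np.pi * 0.01 * t),
                     np.sign(np.sin(2 * np.pi * 0.003 * t))])
A = rng.standard_normal((n_channels, n_sources))       # mixing matrix
X = A @ sources + 0.1 * rng.standard_normal((n_channels, n_samples))

# PCA via eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=1, keepdims=True)
R = Xc @ Xc.T / n_samples
eigvals, eigvecs = np.linalg.eigh(R)                   # ascending order
order = np.argsort(eigvals)[::-1]                      # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The principal components are mutually uncorrelated: their covariance
# matrix is diagonal (equal to diag(eigvals)).
pcs = eigvecs.T @ Xc
C = pcs @ pcs.T / n_samples

# With 2 sources, the 2 leading eigenvalues dominate the remaining (noise)
# eigenvalues, so truncating to them projects onto the signal subspace.
signal_energy = eigvals[:n_sources].sum() / eigvals.sum()
```

Keeping only the leading eigenvectors thus separates the data into a signal subspace (large eigenvalues) and a noise subspace (small eigenvalues), which the data-independent DFT/DCT kernels cannot do.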