Title: Human Motion Capture and Identification for Assistive Systems Design in Rehabilitation
Author: Pubudu N. Pathirana
Publisher: John Wiley & Sons Limited
Genre: Physics
ISBN: 9781119515210
[Equations (1.4) and (1.5), which define the distance from the anatomic joint to the linkage joint, are not reproduced in this extract.]
Figure 1.10 Pictures of animals. Source: Durfee et al. [102]. © 2009, ASME.
Apart from the above three examples, in some studies where the motion trajectories of joints are captured, angle information is still derived to encode human movements. For example, Adams et al. [15] developed a virtual reality system to assess the motor function of the upper extremities in daily living. To encode movement, they used the swing angles of the shoulder joint along the Y and Z axes, the twist angle of the shoulder, the angle of the elbow, their first and second derivatives, the bone lengths of the collarbone, upper arm and forearm, and the pose (position, yaw and pitch) of the vector along the collarbone to describe the movement of the upper body. Here the collarbone is a virtual bone connecting the two shoulders. These parameters formed the state of an unscented Kalman filter, while the positions of the shoulders, elbows and wrists read from a Kinect formed the observation. Another example is that of Wenbing et al. [378], who evaluated the feasibility of using a single Kinect with a series of rules to assess the quality of movements in rehabilitation. Five movements, including hip abduction, bowling, sit to stand, can turn and toe touch, were studied in this paper. For the first four movements, angles were used as encoders. For instance, the change of the angle between the left and right thighs (the vectors from the hip centre to the left and right knees) was used to represent the angle of hip abduction, while the dot product of two vectors (from the hip centre to the left and right shoulders) was used to compute the angle encoding the bowling movement. Additionally, Olesh et al. [267] proposed an automated approach to assess the impairment of upper limb movements caused by stroke. To encode the movement of the upper extremities, the angles of four joints, covering shoulder flexion‐extension, shoulder abduction‐adduction, elbow flexion‐extension and wrist flexion‐extension, were calculated from the 3D joint positions measured with a Kinect.
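As a minimal sketch of the dot-product angle computation described above (not code from any of the cited systems), the angle between the two thigh vectors, taken from the hip centre to the left and right knees, can encode hip abduction. The joint coordinates below are hypothetical Kinect-style 3D positions in metres.

```python
import numpy as np

def joint_angle(centre, end_a, end_b):
    """Angle in degrees at `centre` between the vectors centre->end_a and centre->end_b."""
    v1 = np.asarray(end_a, dtype=float) - np.asarray(centre, dtype=float)
    v2 = np.asarray(end_b, dtype=float) - np.asarray(centre, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against rounding just outside [-1, 1] before arccos.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical joint positions (x, y, z) read from a depth sensor.
hip_centre = [0.00, 0.90, 2.50]
left_knee  = [-0.18, 0.45, 2.48]
right_knee = [ 0.25, 0.47, 2.52]

print(f"hip abduction angle: {joint_angle(hip_centre, left_knee, right_knee):.1f} deg")
```

The same function applies to the other angle encoders mentioned above (e.g. the elbow angle from shoulder, elbow and wrist positions); only the choice of the three joints changes.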
Though angles of joints, as well as their derivatives, are utilised widely in encoding human motions, trajectories of joints and their derivatives can also be observed in some rehabilitation and telerehabilitation applications.
The first example is that of Chang et al. [67], who developed a programme that uses the Kinect as the motion capture device for spinal cord injury (SCI) rehabilitation. In this programme, the trajectories of the hand, elbow and shoulder were recorded to represent the external rotation of the upper extremities. Similarly, Su [339] developed a rehabilitation system, named KHRD, to provide home‐based rehabilitation services. To represent human motions, two key features were used: the trajectories of joints and their speeds. Additionally, Cordella et al. [78] modified the Kinect into a marker‐based device to measure the positions of the joints of a hand (refer to Figure 1.11). Markers with a dimension of 1.2 cm were attached to the finger joints and the wrist. After detecting the centres of these markers, a robust tracking scheme was developed to track the position of each marker. Thus the movement of a hand was encoded by the trajectories of each joint on the hand, together with the trajectory of the wrist.
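The following is an assumed sketch, not taken from the cited systems, of how joint trajectories and their speeds can serve as motion features: positions are stacked per frame and speed is estimated by finite differences over the frame interval. The 30 Hz sampling rate and the synthetic wrist trajectory are illustrative assumptions.

```python
import numpy as np

FRAME_RATE = 30.0  # assumed sensor frame rate in Hz

def trajectory_features(positions, frame_rate=FRAME_RATE):
    """positions: (T, 3) array of a joint's 3D coordinates, one row per frame.
    Returns the trajectory itself and the per-frame speed in m/s."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / frame_rate
    velocity = np.gradient(positions, dt, axis=0)   # central differences over time
    speed = np.linalg.norm(velocity, axis=1)
    return positions, speed

# Illustrative wrist trajectory: a slow reach forward over one second.
t = np.linspace(0.0, 1.0, int(FRAME_RATE) + 1)
wrist = np.stack([0.1 * t, 0.8 + 0.05 * t, 2.0 - 0.3 * t], axis=1)

traj, speed = trajectory_features(wrist)
print(f"mean wrist speed: {speed.mean():.3f} m/s")
```

In practice the same features would be computed for every tracked joint, giving a per-frame feature vector of positions and speeds for the whole limb.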
1.5.3 Summary and challenge
From the literature, it is found that in many studies the encoders used in human motion recognition are similar to those used in physical telerehabilitation. For instance, features such as trajectories, velocity, acceleration, angle, angular velocity and angular acceleration are the most commonly used in both fields. Although patients with movement disorders usually have a limited range of motion, they may be required to perform certain tasks so that their ability to carry out ADLs, which are usually composed of a series of simple movements, can be evaluated.
Figure 1.11 Marker‐based hand tracking system. Source: Cordella et al. [78].
As a result, challenges remain in developing formal descriptions and robust computational procedures for the automatic interpretation and representation of patients' motions. The majority of studies [92, 158] employed a variety of human motion encoders to recognise or decompose general movements, such as reaching, waving hands, jumping, walking and so on. Few of them investigated the details within each general movement, for example, the even smaller atomic components that make up these movements. Such components are important for detailed syntactic and structural descriptions of human movement, especially in clinical and rehabilitation environments, where describing the movements of body parts calls for a form of motion language or, at least, a syntax. A novel approach to encoding human motion trajectories will be discussed in Chapter 3.
1.6 Patients' Performance Evaluation
In recent decades, with the advancements in telerehabilitation and associated motion capture technologies, an increasing number of research and development activities have focused on developing automated quantitative measures of patient performance in ADLs [136, 262]. Given the important role played by the upper extremity in ADLs [99], an automated approach for measuring and assessing the ability of the upper extremities to perform certain tasks is vital if telerehabilitation systems are to deliver their full potential.
1.6.1 Questionnaire‐based assessment scales
In the past few decades, a number of approaches have been proposed for assessing the upper extremities, the majority of which are questionnaire‐based. For musculoskeletal movement disorders of the extremities, most scales are generic. For instance, the self‐reported Musculoskeletal Function Assessment (MFA) instrument [226], the Short Musculoskeletal Function Assessment (SMFA) questionnaire [344] and the self‐administered …