A smart virtual glove for hand telerehabilitation

https://doi.org/10.1016/j.compbiomed.2006.09.011

Abstract

Hand rehabilitation following stroke or hand surgery is repetitive and of long duration, and it can be facilitated with the assistance of complex, heavy and cumbersome sensor-based haptic gloves. The present paper describes a virtual glove, entirely software based, which tracks hand movements by using images collected from webcams together with numerical analysis. Finger forces are calculated from the deformations impressed on objects of known elastic coefficient grasped by the patient's hand. The presented system is notable for its simplicity, generality and low cost. Implementation and results of the proposed virtual glove will be the subject of a future paper.

Introduction

Post-stroke patients and people suffering from orthopaedic hand diseases, in particular after surgery, often show residual hand impairments and need effective rehabilitation therapy. Recent studies have suggested that repetitive, long-duration training using robots and virtual reality is helpful for upper-extremity functional recovery [1], [2]. Moreover, it has been demonstrated that, to be effective, training results have to be continuously monitored by a therapist.

Rehabilitation is broadly classified into two categories: physical rehabilitation (PR) and functional rehabilitation (FR). PR exercises involve the use of force to improve the patient's motor skills (exercising muscles and joints). FR is performed to regain lost skills (such as those needed in activities of daily living or job-related skills). PR exercises, therefore, have much greater diversity, and their output depends on exercise design. The essential feature of these exercises is the patient's interaction with the objects he or she grasps. Each therapy exercise has several levels of difficulty corresponding to the maximum force that can be applied, the time allowed, and other parameters.
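
Purely as an illustration of how such difficulty levels might be encoded (the field names and values below are hypothetical and not taken from the paper), a level could be represented as a small configuration record:

from dataclasses import dataclass


@dataclass
class ExerciseLevel:
    """Hypothetical difficulty setting for one therapy exercise.

    The fields mirror the parameters mentioned in the text: the maximum
    force the patient may apply and the time allowed for the exercise.
    """
    name: str
    max_force_newton: float   # upper bound on the applied force
    time_allowed_s: float     # time limit for completing the exercise
    repetitions: int          # assumed extra parameter, not from the paper


# Example: three increasingly demanding levels for a ball-squeezing exercise.
LEVELS = [
    ExerciseLevel("easy",   max_force_newton=5.0,  time_allowed_s=60.0, repetitions=5),
    ExerciseLevel("medium", max_force_newton=10.0, time_allowed_s=45.0, repetitions=10),
    ExerciseLevel("hard",   max_force_newton=20.0, time_allowed_s=30.0, repetitions=15),
]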

Traditional rehabilitation is done one-to-one: one therapist (or sometimes several) working with one patient. Costs are therefore high, especially for demanding patients such as those with brain injuries or post-surgery impairments. For the parts of the therapy that the patient performs at home, there is currently no monitoring. Errors can thus occur because of missing data, or in interpreting the evaluated data, because no online database can be used. Such errors are both positional and temporal, since the temporal granularity of manual data recording is low.

Repetitive training can be facilitated with the assistance of mechanical devices integrated with an augmented reality system. Several such integrated systems (haptic gloves) have been designed for virtual reality simulation: the Rutgers-II ND Hand Master haptic device [3], the CyberGrasp glove (Immersion Inc.) [4], the LRP Glove [5], and integrated versions of them [6]. A general description of the architecture is reported in Fig. 1. In particular, it describes the mechanical units involved and how they interact while being monitored by a dedicated PC, which communicates with a host PC where the results of the exercises are stored for remote access and processing. Although such auxiliary rehabilitation systems have undoubted advantages, problems with their utilization greatly restrict their adoption. They can be very expensive because of their intrinsic complexity, especially of the mechanical and haptic devices. Moreover, these devices have several force-feedback terminals per finger, with the forces grounded in the palm or on the back of the hand, which makes them heavy (some hundreds of grams) and cumbersome and greatly limits hand movements. This also reduces the possibility of using them away from the home or clinical environment. Furthermore, each device has to be constructed specifically for each patient, taking into consideration the nature and extent of the patient's residual impairments, which seriously limits re-use of both the mechanical device and the associated control system. As the patient becomes familiar with the supporting device, he or she can tend to rely on it too much rather than using his or her own hand. Finally, these devices can be very difficult to use because they require frequent calibration.

We describe a method which, based on image analysis and pattern reconstruction, provides a solution to these limitations: a virtual glove can be used instead of a haptic one. The idea is to track hand and finger movements by using a set of video cameras suitably positioned at the vertices of a cube and then to apply image processing techniques. In this way it is possible to monitor the movements of each finger and of the whole hand and to attribute to these movements a degree of recovery with respect to an ideal exercise. The forces applied by the patient's hand or fingers can be measured indirectly from the deformations impressed on elastic objects of known elastic coefficient (rubber balls of various sizes and stiffness, peg boards, etc.) grasped by the patient. The deformation of the object can be recorded and measured by the pattern recognition system, and the force induced by each finger can thus be calculated from the depth of the depressions produced. These forces can also be deduced from hand or finger movements when an object is grasped.

In what follows, a schematic description of the "virtual glove" components and functioning is presented, together with a description of its integration with existing virtual reality systems. Implementation and results of the proposed rehabilitation system will be discussed in a future paper.
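
The force-from-deformation idea can be made concrete under a simplifying assumption: if the grasped object responds linearly (Hooke's law, F = k·x), the finger force follows directly from the measured depression depth and the object's known elastic coefficient. The following minimal Python sketch illustrates this; the function name, the numeric values and the linearity assumption are ours, not the paper's.

def finger_force(elastic_coefficient: float, depression_depth_m: float) -> float:
    """Estimate the force exerted by one finger from the depth of the
    depression it leaves on a grasped elastic object.

    Assumes a linear elastic response (Hooke's law): F = k * x, where k is
    the known elastic coefficient of the object and x the measured
    deformation depth.
    """
    return elastic_coefficient * depression_depth_m


# Example: a rubber ball with k = 300 N/m and a depression of 1.5 cm
# measured from the reconstructed 3D surface of the ball.
force = finger_force(elastic_coefficient=300.0, depression_depth_m=0.015)
print(f"Estimated finger force: {force:.1f} N")   # -> 4.5 N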

Section snippets

Descriptions of components

The virtual glove is based on hand posture detection and tracking. Recently, an increasing number of computer vision projects dealing with the detection and tracking of objects in motion, especially human body classification, have been proposed and developed [7], [8], [9], [10], [11]. In particular, some models extract 3D voxels from multiple video cameras [8], [9], [10]. Some of these methods can be very cumbersome for difficult models, such as the human body and also for a single hand. In
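
The visible text refers to methods that extract 3D voxels from multiple video cameras [8], [9], [10]. As a rough illustrative sketch only (a basic silhouette-based voxel-carving scheme, not the procedure of the cited works; the projection callables, the voxel grid and the input format are assumptions of ours), such an extraction could look like this:

import numpy as np


def carve_voxels(silhouettes, projections, grid_points):
    """Very simplified visual-hull (voxel-carving) reconstruction.

    silhouettes : list of 2D boolean arrays (True where the hand is seen)
    projections : list of callables, one per camera, mapping a 3D point to
                  (row, col) pixel coordinates in that camera's image
    grid_points : (N, 3) NumPy array of candidate voxel centres

    A voxel is kept only if it projects inside the silhouette of every camera.
    """
    keep = np.ones(len(grid_points), dtype=bool)
    for sil, project in zip(silhouettes, projections):
        rows, cols = sil.shape
        for i, point in enumerate(grid_points):
            if not keep[i]:
                continue  # already carved away by a previous camera
            r, c = project(point)
            r, c = int(round(r)), int(round(c))
            keep[i] = 0 <= r < rows and 0 <= c < cols and bool(sil[r, c])
    return grid_points[keep]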

Operative description

The videos collected by the four webcams are sent to a computer and processed by six software modules (see Fig. 2), including the software tool described in the previous section. In particular, the first module, named the classifier, serves to collect and classify the initial position, shape and size of the analysed hand, without the use of a grasped object. In this module, a numerical model of the real hand, including its initial impairments, is built and a set of a priori information is stored for
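
To make the module chain concrete, the following schematic Python skeleton sketches how the six modules could be connected; only the classifier is named in the visible text, so every other detail here (method names, the generic run_pipeline driver, the returned dictionary) is a placeholder of ours, not the paper's design.

class Classifier:
    """First of the six software modules described in the text: it collects
    and classifies the initial position, shape and size of the analysed hand
    (without a grasped object) and builds a numerical model of the real
    hand, including its initial impairments."""

    def process(self, frames, state=None):
        # Placeholder: a real implementation would segment the hand in the
        # webcam frames and fit a parametric hand model to it.
        hand_model = {"posture": None, "shape": None, "size": None}
        return {"hand_model": hand_model}


def run_pipeline(frames, modules):
    """Chain the software modules: each receives the webcam frames and the
    output of the previous stage. The names and roles of the remaining five
    modules are not given in the visible text, so they stay generic here."""
    state = None
    for module in modules:
        state = module.process(frames, state)
    return state


# Example: a pipeline containing only the classifier stage.
result = run_pipeline(frames=[], modules=[Classifier()])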

Conclusion and future research

A completely software-based hand telerehabilitation system (the virtual glove), which uses webcams to visually track movements and calculate forces, has been designed.

The presented architecture does not depend upon complicated mechanical haptic gloves, their associated hardware (compressors, transducers, etc.), or the software necessary to drive these hardware/interface units.

The space occupied by the overall assembly is greatly reduced, and its weight and cost are also much lower. The

Summary

Impairment of hand function is prevalent among stroke survivors and people suffering from orthopaedic diseases, in particular after hand surgery, motivating the search for effective rehabilitation therapy. For physical and functional recovery, repetitive and long-duration training using virtual reality is helpful, but training results have to be continuously monitored by a therapist to be effective. Repetitive training can be facilitated with the assistance of mechanical devices integrated with

Acknowledgement

The author wishes to thank Carmelita Marinelli for her help in preparing the figures.


References (16)



Giuseppe Placidi was born in L’Aquila, Italy, on November 1, 1967. He received his “laurea” degree in Computer Science in 1991 from the University of L’Aquila (supervisor Prof. Alberto Marchetti Spaccamela). During his university course, he received a student fellowship in computational mathematics (September 1990) from the National Research Council (C.N.R.) of Rome. In 1996 he received his Ph.D. degree in Medical Imaging from the University of L’Aquila (supervisor Prof. Antonello Sotgiu). From 1996 to 1997 he was a research contractor for I.N.F.M. (National Institute for the Physics of Matter). From 1998 to 1999 he was a postdoctoral fellow in the Department of Electrical Engineering of the University of L’Aquila. From September 1999 to February 2005 he was a research associate in the field of Medical Imaging and Tomography in the Department of Science and Biomedical Technology of the University of L’Aquila. Since March 2005 he has been an assistant professor in Computer Science in the same department.

He is the author or co-author of 38 papers published in international scientific journals and books, 48 refereed conference proceedings (at 20 of which he gave oral presentations), and five patents.

His research topics are: medical tomography (image reconstruction, filtering and deconvolution algorithms), numerical algorithms for magnetic field optimization, acquisition sequences for functional MRI, applications of functional MRI, lossless image compression, artificial intelligence in medicine, and information theory.
