

The Interdisciplinary Research Project

Gesture Recognition with SensorGloves

- A short overview -


The Interdisciplinary Research Project (IFP) "Gesture Recognition with SensorGloves" was launched in August 1994 and is funded by the Technical University of Berlin.

Three university departments are involved: the Real-Time Systems and Robotics Research Group (computer science), the microsensorics research group, and the semiotics research group.

Research is done on sensor-based recognition of human gesture codes, with particular attention to gestures produced with the hands and arms. For this, hand movements have to be measured as accurately and completely as possible: the hands' position and orientation in 3-space, their position and orientation with respect to the human body, finger flexion and bending, as well as the pressure distribution on the palms during grasping.

Measurements are conducted with several different sensors: an ultrasonic ranging system, developed as part of a diploma thesis at the Real-Time Systems and Robotics Research Group, measures the hands' absolute spatial position and orientation; in addition, finger flexion and grasp pressure distribution are captured by the TUB-SensorGlove, which was developed as part of a pre-diploma thesis at the same institute.
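
The internals of the ultrasonic system are not described here; purely as a hedged sketch of the general principle, a time-of-flight measurement converts the sound propagation delay into a distance, and several such distances from fixed receivers locate the hand by trilateration. All names and values below are illustrative assumptions, not the project's actual implementation:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed value)

    def range_from_tof(tof_s):
        """Convert a one-way ultrasonic time of flight (seconds) into metres."""
        return SPEED_OF_SOUND * tof_s

    def locate(receivers, ranges):
        """Least-squares hand position from four or more fixed receivers.

        receivers : (N, 3) array of known receiver coordinates
        ranges    : (N,) array of measured distances to the transmitter
        Subtracting the first sphere equation |x - r_i|^2 = d_i^2 from the
        others yields a linear system in the unknown position x.
        """
        receivers = np.asarray(receivers, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        r0, d0 = receivers[0], ranges[0]
        A = 2.0 * (receivers[1:] - r0)
        b = (d0**2 - ranges[1:]**2
             + np.sum(receivers[1:]**2, axis=1) - np.sum(r0**2))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x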

The patented SensorGlove was exhibited at the Hanover Industrial Fair in 1993. Its variety of sensors and their high accuracy render it superior to many commercially available systems. An improved prototype was presented at the Hanover CeBIT '95. Twelve position sensors fastened on the back of the glove measure the user's finger flexion with a resolution of approximately one third to one degree. Twelve pressure sensors on the glove's palm measure the forces occurring during object grasping.
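
Taken together, one sample of the combined measurement system thus comprises absolute position and orientation from the ultrasonic ranger plus twelve flexion and twelve pressure values from the glove. A minimal sketch of such a per-frame record (field names and types are assumptions for illustration only):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class GloveFrame:
        """One combined sensor sample (illustrative field layout)."""
        position: Tuple[float, float, float]     # absolute hand position (ultrasonic system)
        orientation: Tuple[float, float, float]  # hand orientation in 3-space
        flexion: Tuple[float, ...]               # 12 finger-flexion angles, degrees
        pressure: Tuple[float, ...]              # 12 palm pressure readings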

The sensors currently used in the project will be supplemented by new ones being developed by the microsensorics research group. During the first phase of the project, the group will concentrate on developing acceleration sensors for the glove. Accelerometers resolve fast movements much better than the ultrasonic system. The glove's path in 3-space can be reconstructed mathematically if acceleration is measured simultaneously along all three axes. This calls for the development of new micromechanical devices, as sensors capable of triaxial, on-chip acceleration measurement have not been available so far.
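
The reconstruction itself is numerically straightforward: integrating acceleration once yields velocity, integrating twice yields position. A minimal sketch, assuming calibrated, gravity-compensated acceleration samples at a fixed rate (the function name and the drift remark are illustrative, not the project's actual code):

    import numpy as np

    def reconstruct_path(acc, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
        """Reconstruct a 3-space trajectory from triaxial acceleration samples.

        acc : (N, 3) array of gravity-compensated accelerations in m/s^2
        dt  : sampling interval in seconds
        Sensor noise makes the estimate drift quadratically with time, so
        a slower absolute reference (such as the ultrasonic system) is
        still needed for periodic correction.
        """
        acc = np.asarray(acc, dtype=float)
        vel = np.asarray(v0) + np.cumsum(acc, axis=0) * dt   # first integration: velocity
        pos = np.asarray(p0) + np.cumsum(vel, axis=0) * dt   # second integration: position
        return pos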

A close working relationship between the microsensorics and computer science research groups is very important, as sensor specifications, signal-processing issues and sensor characteristics have to be discussed thoroughly. Only this ensures that the newly developed sensors operate correctly and successfully on the glove.

The semiotics research group will evaluate various gestural codes with regard to their suitability for recognition with the SensorGlove. Mainly two types of gestures will be reviewed: common "everyday" gestures and specialist gestures. An important part of the work will be the compilation of a dictionary of Berlin emblems, modelled on the work of Ekman, Johnson, Sparhawk and others. Furthermore, some small repertoires of specialist gestures developed in or for working environments will be added, for example gestures for the control of cranes and other machinery used on building sites, and the "studio gestures" a radio producer uses to communicate with the announcer in the sound-proof broadcast booth.

The semiotics research group systematically records gestural codes, which are then transcribed and compiled into a dictionary for further use, such as their simulation and recognition with the SensorGlove as an input device. Gesture data is processed with the aid of modern video equipment, including computer-assisted image analysis. The collected material is made available to the project partners in the form of a CD-ROM image database.

The computer science research group concentrates on gesture recognition and on the further development of the TUB-SensorGlove. For the latter, the microsensorics group's newly developed accelerometers have to be integrated into the glove, and suitable interfaces to the existing hardware must be created. The new prototype must then be tested and calibrated; only then is it possible to obtain reliable data for gesture recognition applications.

Gesture data can be analyzed with many different pattern recognition methods and algorithms. Among others, classical statistical methods, neural networks, genetic algorithms and fuzzy methods will be evaluated for gesture recognition. In part, existing methods can be adapted to the new problem; in part, completely new methods must be developed. As data analysis is very time-consuming, a fast workstation is essential for real-time gesture recognition (the project uses a DEC Alpha).
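
The project's concrete algorithms are not spelled out here; purely as a hedged illustration of one classical approach of the kind mentioned, the sketch below classifies a recorded gesture trace by nearest-neighbour comparison under dynamic time warping, which tolerates differences in gesture speed. All names are illustrative assumptions:

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic-time-warping distance between two gesture traces.

        a, b : (T, D) arrays of sensor samples, e.g. 12 flexion values
        per frame; warping aligns traces of different lengths and speeds.
        """
        a, b = np.asarray(a, float), np.asarray(b, float)
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def classify(trace, templates):
        """Nearest-neighbour label for a trace, given (label, template) pairs."""
        return min(templates, key=lambda lt: dtw_distance(trace, lt[1]))[0]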

Gesture material for automatic recognition is collected in close cooperation with the semiotics group: it is important to find a suitable language or notation for the symbolic representation of gestures in the computer, one that supports both the transcription of video images and further symbolic gesture processing. In a first step, the gestures that are later to be recognized automatically are selected from the semiotics group's material. Afterwards, test persons can enter the selected material into the computer via a SensorGlove within the framework of a specific application.
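
The notation itself is still to be chosen; purely as a hedged illustration, a dictionary entry could link a gesture's label and transcription to its recorded sensor traces roughly like this (all field names are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class GestureEntry:
        """A dictionary entry linking symbolic and sensor-level descriptions."""
        label: str            # emblem name, e.g. from the Berlin dictionary
        transcription: str    # symbolic notation obtained from video transcription
        meaning: str          # gloss of the gesture's conventional meaning
        traces: list = field(default_factory=list)  # recorded sensor-sample sequences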

There are many applications for sensor-based gesture recognition: from navigation commands (keyword "cyberspace") and applications in medicine and industry (e.g. precise telecontrol of surgical robots, or telecontrol of robots and machinery in outer space and at other locations too dangerous for humans to access) to enabling applications such as communication enhancements for the deaf (both among themselves and with hearing persons); the list of examples is endless.

In the course of the project, a complete gesture recognition system will be built, consisting, among other things, of modules for gesture input, gesture preprocessing and analysis, an integrated gesture database (containing multiple gesture dictionaries), graphics display routines for gesture data, and control modules for a robot and other devices. For demonstration purposes, one of the project's goals is the control of a robot and of a computer-simulated crane with simple gesture commands.
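
As a hedged sketch of how such modules might fit together (the function names, placeholder bodies and interfaces are assumptions, not the project's actual architecture):

    def preprocess(frames):
        """Placeholder for filtering, normalization and segmentation."""
        return frames

    def analyze(features, dictionary):
        """Placeholder for matching features against the gesture database."""
        return "unknown gesture"

    def recognition_pipeline(frames, dictionary, outputs):
        """Illustrative data flow through the planned modules."""
        label = analyze(preprocess(frames), dictionary)
        for handle in outputs:   # e.g. graphics display, robot or crane control
            handle(label)
        return label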

The complete automatic recognition of human sign languages is a long-term research goal, small parts of which we hope to achieve in this project. If feasible, it would allow the deaf to communicate with their environment in a much simpler and more natural way. For example, a "gesture telephone" could then be realized, transmitting gesture data (captured by two SensorGloves and some other devices) via an ordinary telephone line to a computer that displays the data on its screen, either as written text or as a graphical image of moving limbs. (The videophone is not a viable alternative, as its transmission bandwidth and frame rate are as yet far too low for the fast, highly detailed hand movements of a signing person.) Instead of a visual display, one could also imagine direct speech output, thus turning the computer into a translator between the hearing and the deaf.
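
The bandwidth argument can be made concrete with a back-of-the-envelope estimate; all figures below are assumptions for illustration, not measured project data:

    # Rough data rate for a "gesture telephone" (assumed figures)
    channels_per_glove = 30   # 3 position + 3 orientation + 12 flexion + 12 pressure
    gloves = 2
    sample_rate_hz = 50       # assumed sampling rate
    bits_per_sample = 8       # assumed quantization

    kbit_per_s = channels_per_glove * gloves * sample_rate_hz * bits_per_sample / 1000
    print(f"{kbit_per_s:.1f} kbit/s")   # 24.0 kbit/s

Under these assumptions the raw glove data stays within reach of a mid-1990s modem line (14.4 to 28.8 kbit/s), whereas even heavily compressed video of fast hand movements requires far more.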


Contact:

Technische Universität Berlin
Institut für Technische Informatik
Sekretariat EN10
Einsteinufer 17
D-10587 Berlin

Last change: Friday, 03-Nov-1995
