"Scientific understanding and vision-based technological development for continuous sign language recognition and translation"
Deaf communities revolve around sign languages, which are their natural means of communication. Although deaf, hard-of-hearing and hearing signers can communicate without barriers amongst themselves, the deaf community faces a serious challenge in integrating into educational, social and work environments, as the vast majority of Europeans have no signing skills. The overall goal of SignSpeak is to develop a new vision-based technology for translating continuous sign language into text, in order to provide new e-Services to the deaf community and to improve their communication with hearing people, and vice versa. SignSpeak could be integrated with other technologies, as shown in the next picture:
To this end, a new scientific study will be carried out at the beginning of the project to deepen the linguistic understanding of sign languages; this new knowledge about the structure of sign language, viewed from the perspective of machine recognition of continuous signing, is crucial for the further development of sign-language-to-text technologies. This breakthrough in the understanding of sign language will enable a subsequent breakthrough in the development of a new vision-based technology for continuous sign language recognition and translation into text. The SignSpeak system will track the dominant and non-dominant hand, as well as facial expressions and body posture, taking into account the signs performed before and after or, in other words, the context in which a sign has been realised. Additionally, thanks to the techniques implemented for feature extraction (image analysis) and sign recognition, the SignSpeak technology will be independent of the signer and of the recording environment. A conceptual scheme of the work planned within the project is presented below:
SignSpeak therefore combines innovative scientific theory with vision-based technology development, gathering within a common framework novel linguistic research and the most advanced techniques in image analysis, automatic speech recognition (ASR) and statistical machine translation (SMT). SignSpeak will be a first step towards bringing sign language recognition and translation to the levels already achieved by related technologies such as text-to-speech and speech-to-text.
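The pipeline described above, from visual tracking through feature extraction and context-dependent sign recognition to translation into text, can be sketched in outline. The following Python sketch is purely illustrative: all class names, function names and the toy recognition rule are hypothetical and stand in for the real image-analysis, ASR-style recognition and SMT components of the project.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FrameFeatures:
    """Per-frame visual features, mirroring what the system will track."""
    dominant_hand: Tuple[float, float]      # normalised (x, y) position
    non_dominant_hand: Tuple[float, float]  # normalised (x, y) position
    facial_expression: str                  # coarse expression label
    body_posture: str                       # coarse posture label

def extract_features(video_frames: List[dict]) -> List[FrameFeatures]:
    """Image-analysis stage (stub): one feature vector per frame."""
    return [FrameFeatures(f["dom"], f["non_dom"], f["face"], f["posture"])
            for f in video_frames]

def recognise_signs(features: List[FrameFeatures], context: int = 1) -> List[str]:
    """Sign-recognition stage (stub): each frame is labelled using a window
    of neighbouring frames, mimicking context-dependent recognition."""
    glosses = []
    for i in range(len(features)):
        window = features[max(0, i - context):i + context + 1]
        # Toy rule: a raised dominant hand across the whole window
        # is recognised as the gloss "HELLO".
        if all(w.dominant_hand[1] > 0.5 for w in window):
            glosses.append("HELLO")
        else:
            glosses.append("UNKNOWN")
    return glosses

def translate(glosses: List[str]) -> str:
    """Translation stage (stub): map the gloss sequence to running text."""
    lexicon = {"HELLO": "hello", "UNKNOWN": "..."}
    return " ".join(lexicon[g] for g in glosses)
```

A short usage example, with three identical toy frames: `translate(recognise_signs(extract_features(frames)))` chains the three stages in the same order as the conceptual scheme.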
The impact of SignSpeak’s results, however, reaches well beyond sign language applications: the results also have important industrial applications, improving gesture-based human-machine communication and enabling automatic object and body-part recognition and tracking in video streams.