
SiGML Signing System for Virtual Human Communication

SiGML is a notation developed to support the ViSiCAST and eSIGN projects, which use virtual humans for sign language communication. SiGMLSigning generates avatar animation from signing sequences defined in SiGML, and because SiGML incorporates the HamNoSys phonetic model it can represent any sign language. Animgen converts SiGML into a sequence of animation frames, each a configuration of the avatar's virtual skeleton. The system defines interfaces that allow avatars such as VGuido-Mask2 and the Avatar Research Platform to be driven by this synthetic animation. Developed at UEA, SiGMLSigning offers a flexible solution for signing animation across sign languages.


Presentation Transcript


SiGML Notation & SiGMLSigning System
R. Elliott, J.R.W. Glauert, J.R. Kennaway
School of Computing Sciences

[Poster figures, left to right: the SiGMLSigning processing pipeline (Signing Text Input → Signing Animation Generation → Signing Avatar Animation, with SiGML (HamNoSys) signing units and frame data passed between the stages); the VGuido-Mask2 avatar; the ARP avatar.]

SiGML, the Signing Gesture Markup Language, is the notation developed at UEA over the past three years to support the work of the EU-funded ViSiCAST and eSIGN projects. Both projects are concerned with the use of virtual humans to communicate with deaf people in their preferred medium, that is, in sign language. The SiGML notation allows sign language sequences to be defined in a form suitable for performance by a virtual human, or avatar, on a computer screen. SiGMLSigning is the software system we have developed to generate signing avatar animation from signing sequences defined in SiGML.

SiGML and HamNoSys

SiGML is a form of Extensible Markup Language, XML – a simple but flexible format for the exchange of structured and semi-structured data. XML is represented as plain text; hence it is easily transported over the Internet and World Wide Web. The most important technical influence on the SiGML definition is HamNoSys, the Hamburg Notation System – a well-established transcription system for sign languages, developed by our partners at the Institute for German Sign Language and Deaf Communication at the University of Hamburg. Each sign language is an authentic language in its own right, with its own distinctive grammatical characteristics: a sign language is not an alternative form of some spoken language. Where sign language does differ from spoken language is at the phonetic level: sign language is articulated primarily by the hands, but also using other parts of the signer's anatomy, especially the head and face. The SiGML notation incorporates the HamNoSys phonetic model, and hence SiGML can represent signing expressed in any sign language. (An illustrative sketch of a SiGML-style document appears below.)

SiGMLSigning

SiGMLSigning is a flexible software system that we have developed to provide synthetic animation of signing sequences defined in SiGML. SiGMLSigning implements the processing pipeline shown schematically above left. At the heart of this process is Animgen, the "synthetic animation engine": it converts SiGML to a sequence of animation "frames", to be displayed at 25 fps, each corresponding to a configuration of the avatar's virtual skeleton. The SiGMLSigning architecture defines interfaces allowing any suitable avatar to be driven in this way. The eSIGN project uses the VGuido-Mask2 avatar, developed by our partners at Televirtual Ltd. (above centre). To support our research into synthetic virtual human animation we have developed our own avatar animation system, the Avatar Research Platform, ARP (above right). (A schematic sketch of this pipeline interface also appears below.)

References:

For further information about the eSIGN project: http://www.visicast.cmp.uea.ac.uk/eSIGN/

R. Elliott, J.R.W. Glauert, J.R. Kennaway, and I. Marshall. Development of language processing support for the ViSiCAST project. In ASSETS 2000: 4th International ACM SIGCAPH Conference on Assistive Technologies, Washington DC, USA, 2000.

J.R.W. Glauert. ViSiCAST: Sign language using virtual humans. In International Conference on Assistive Technology (ICAT 2002), pages 21–33, Derby, 2002. BCS.

J.R. Kennaway. Synthetic animation of deaf signing gestures. In 4th International Workshop on Gesture and Sign Language Based Human-Computer Interaction, LNAI, pages 146–157. Springer-Verlag, 2001.
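To make the notation concrete, here is a minimal Python sketch that parses a small SiGML-style fragment and lists the glossed signs it contains. The fragment is illustrative only: the element names (sigml, hns_sign, hamnosys_manual and the individual symbol elements) are assumptions chosen to match the description above, not the official SiGML schema.

    # A minimal sketch, assuming illustrative SiGML-style element names;
    # the real SiGML schema may differ in detail.
    import xml.etree.ElementTree as ET

    SIGML_TEXT = """
    <sigml>
      <!-- One sign, glossed for readability; the manual part carries
           HamNoSys-style handshape, orientation and movement symbols. -->
      <hns_sign gloss="HELLO">
        <hamnosys_manual>
          <hamflathand/>
          <hamextfingeru/>
          <hampalml/>
          <hammover/>
        </hamnosys_manual>
      </hns_sign>
    </sigml>
    """

    def list_glosses(sigml_text: str) -> list[str]:
        """Return the gloss of every sign element in a SiGML-style document."""
        root = ET.fromstring(sigml_text)
        return [sign.get("gloss", "?") for sign in root.findall("hns_sign")]

    if __name__ == "__main__":
        # SiGML is plain-text XML, so standard XML tooling is enough to read it.
        print(list_glosses(SIGML_TEXT))   # -> ['HELLO']

The point is only the one the poster makes: because SiGML is plain-text XML, it is easy to generate, transport over the Web and parse with standard tools.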
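The pipeline itself can be summarised as SiGML in, frame data out, avatar animation at 25 fps. The following Python sketch is schematic only: the names (Frame, AvatarDriver, generate_frames) are hypothetical stand-ins for Animgen and the SiGMLSigning avatar interface, under the assumption that a frame is essentially a set of joint rotations for the avatar's virtual skeleton.

    # Schematic sketch of the SiGMLSigning pipeline described above.
    # All names are hypothetical placeholders, not the real Animgen API.
    from dataclasses import dataclass, field
    from typing import Protocol

    FRAME_RATE = 25                  # frames per second, as stated in the poster
    FRAME_TIME = 1.0 / FRAME_RATE    # i.e. 40 ms per frame

    @dataclass
    class Frame:
        """One animation frame: a configuration of the avatar's virtual skeleton."""
        time: float                  # seconds from the start of the sequence
        joint_rotations: dict[str, tuple[float, float, float]] = field(default_factory=dict)

    class AvatarDriver(Protocol):
        """Any suitable avatar (e.g. VGuido-Mask2 or ARP) that accepts frame data."""
        def play(self, frames: list[Frame]) -> None: ...

    def generate_frames(sigml_text: str) -> list[Frame]:
        """Stand-in for the synthetic animation engine (Animgen in SiGMLSigning):
        convert a SiGML signing sequence into a sequence of skeleton frames."""
        # A real engine interprets the HamNoSys-level description; this stub
        # emits one second of empty frames just to show the shape of the data.
        return [Frame(time=i * FRAME_TIME) for i in range(FRAME_RATE)]

    def sign(sigml_text: str, avatar: AvatarDriver) -> None:
        """Pipeline: SiGML signing units -> frame data -> avatar animation."""
        avatar.play(generate_frames(sigml_text))

Keeping frame generation separate from avatar playback mirrors the stated design goal: any avatar that can consume frame data can be driven by the same engine.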
School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.
Tel: +44 1603 592847; Fax: +44 1603 593345; http://www.cmp.uea.ac.uk/
