Title
Annotation of Human Gesture using 3D Skeleton Controls
Authors
Quan Nguyen and Michael Kipp
Abstract
The manual transcription of human gesture behavior from video for linguistic analysis is a work-intensive process that results in a rather coarse description of the original motion. We present a novel approach for transcribing gestural movements: by overlaying an articulated 3D skeleton onto the video frame(s), the human coder can replicate the original motions on a pose-by-pose basis by manipulating the skeleton. Our tool is integrated into the ANVIL tool so that both symbolic interval data and 3D pose data can be entered in a single tool. Our method allows relatively quick annotation of human poses, which we validated in a user study. The resulting data are precise enough to create animations that match the original speaker's motion, and these can be checked with a realtime viewer. The tool can be applied to a variety of research topics in the areas of conversation analysis, gesture studies and intelligent virtual agents.
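For illustration only, the minimal sketch below (not taken from the paper; the names PoseKeyframe, interpolate and the joint label "r_elbow" are hypothetical) shows one plausible way data from pose-by-pose skeleton annotation could be represented and replayed: each annotated pose stores per-joint rotations at a video timestamp, and intermediate frames are obtained by linear interpolation, roughly the kind of reconstruction a realtime viewer would perform to turn the annotated poses back into an animation.

import math
from dataclasses import dataclass, field
from typing import Dict, Tuple

# A joint rotation stored as Euler angles in degrees (x, y, z).
# This is an assumed representation; the actual tool's data model may differ.
Rotation = Tuple[float, float, float]

@dataclass
class PoseKeyframe:
    """One annotated skeleton pose, aligned to a video timestamp (seconds)."""
    time: float
    joint_rotations: Dict[str, Rotation] = field(default_factory=dict)

def interpolate(a: PoseKeyframe, b: PoseKeyframe, t: float) -> Dict[str, Rotation]:
    """Linearly blend two keyframed poses at video time t, with a.time <= t <= b.time."""
    alpha = (t - a.time) / (b.time - a.time) if b.time > a.time else 0.0
    blended: Dict[str, Rotation] = {}
    for joint, ra in a.joint_rotations.items():
        rb = b.joint_rotations.get(joint, ra)
        blended[joint] = tuple((1 - alpha) * ca + alpha * cb for ca, cb in zip(ra, rb))
    return blended

# Example: two annotated poses of a (hypothetical) right elbow joint,
# replayed at an intermediate video frame.
pose_start = PoseKeyframe(time=12.40, joint_rotations={"r_elbow": (0.0, 0.0, 10.0)})
pose_end   = PoseKeyframe(time=13.00, joint_rotations={"r_elbow": (0.0, 0.0, 85.0)})
print(interpolate(pose_start, pose_end, 12.70))  # -> {'r_elbow': (0.0, 0.0, 47.5)}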
Topics
Corpus (creation, annotation, etc.), Discourse annotation, representation and processing, Tools, systems, applications
Full paper
Annotation of Human Gesture using 3D Skeleton Controls
Slides
-
Bibtex
@InProceedings{NGUYEN10.952,
  author    = {Quan Nguyen and Michael Kipp},
  title     = {Annotation of Human Gesture using 3D Skeleton Controls},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year      = {2010},
  month     = {may},
  date      = {19-21},
  address   = {Valletta, Malta},
  editor    = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis and Mike Rosner and Daniel Tapias},
  publisher = {European Language Resources Association (ELRA)},
  isbn      = {2-9517408-6-7},
  language  = {english}
}