Summary of the paper

Title: A Model to Generate Adaptive Multimodal Job Interviews with a Virtual Recruiter
Authors: Zoraida Callejas, Brian Ravenet, Magalie Ochs and Catherine Pelachaud
Abstract: This paper presents an adaptive model of multimodal social behavior for embodied conversational agents. The context of this research is the training of youngsters for job interviews in a serious game where the agent plays the role of a virtual recruiter. With the proposed model the agent is able to adapt its social behavior according to the anxiety level of the trainee and a predefined difficulty level of the game. This information is used to select the objective of the system (to challenge or comfort the user), which is achieved by selecting the complexity of the next question posed and the agent's verbal and non-verbal behavior. We have carried out a perception study that shows that the multimodal behavior of an agent implementing our model successfully conveys the expected social attitudes.
Topics: Tools, Systems, Applications, Usability, User Satisfaction
Full paper: A Model to Generate Adaptive Multimodal Job Interviews with a Virtual Recruiter
BibTeX: @InProceedings{CALLEJAS14.689,
  author = {Zoraida Callejas and Brian Ravenet and Magalie Ochs and Catherine Pelachaud},
  title = {A Model to Generate Adaptive Multimodal Job Interviews with a Virtual Recruiter},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)},
  year = {2014},
  month = {may},
  date = {26-31},
  address = {Reykjavik, Iceland},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-8-4},
  language = {english}
 }