Summary of the paper

Title: Human-Robot Dialogue and Collaboration in Search and Navigation
Authors: Claire Bonial, Stephanie Lukin, Ashley Foots, Cassidy Henry, Matthew Marge, Kimberly Pollard, Ron Artstein, David Traum and Clare Voss
Abstract: Collaboration with a remotely located robot in tasks such as disaster relief and search and rescue can be facilitated by grounding natural language task instructions into actions executable by the robot in its current physical context. In order to gain understanding of the translation an instruction undergoes starting from verbal human intent, to understanding and processing, and ultimately, to robot execution, we use a Wizard-of-Oz methodology to elicit data in which a participant speaks freely to instruct a robot on what to do and where to move through a remote environment to accomplish collaborative search and navigation tasks. This data offers the potential for exploring and evaluating action models by connecting natural language instructions to execution by a physical robot (controlled by a human wizard). In this paper, a description of the corpus (soon to be openly available) and examples of actions in the dialogue are provided.
Topics: Multiparty Dialogue, Human-Robot Interaction, Dialogue Structure Annotation
Full paper: Human-Robot Dialogue and Collaboration in Search and Navigation
Bibtex:
@InProceedings{BONIAL18.4,
  author = {Claire Bonial and Stephanie Lukin and Ashley Foots and Cassidy Henry and Matthew Marge and Kimberly Pollard and Ron Artstein and David Traum and Clare Voss},
  title = {Human-Robot Dialogue and Collaboration in Search and Navigation},
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {may},
  date = {7-12},
  location = {Miyazaki, Japan},
  editor = {James Pustejovsky and Ielka van der Sluis},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {979-10-95546-06-1},
  language = {english}
}