Summary of the paper

Title Learning Subjective Language: Feature Engineered vs. Deep Models
Authors Muhammad Abdul-Mageed
Abstract Treatment of subjective language is a vital component of a sentiment analysis system. However, detection of subjectivity (i.e., subjective vs. objective content) has attracted far less attention than sentiment recognition (i.e., positive vs. negative language). In particular, online social context and the structural attributes of communication therein promise to help improve the learning of subjective language. In this work, we describe successful models exploiting a rich and comprehensive feature set based on the structural and social context of the Twitter domain. In light of the recent successes of deep learning models, we also experiment effectively with deep gated recurrent neural networks (GRUs) on the task. Our models exploiting structure and social context with an SVM achieve accuracy more than 12% higher than a competitive baseline on a blind test set. Our GRU model yields even better performance, reaching 77.19% accuracy (i.e., ~14.50% higher than the baseline on the same test set, p < 0.001).
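The paper contrasts a feature-engineered SVM against a deep gated recurrent network for binary subjectivity detection. As a rough illustration of the latter, here is a minimal, hypothetical GRU subjectivity classifier in PyTorch; every name and hyperparameter below (GRUSubjectivityClassifier, embed_dim, hidden_dim, the two-layer depth) is an assumption for the sketch, not the architecture reported in the paper, and the paper's feature-engineered models additionally use structural and social-context Twitter features that are not reproduced here.

# Hypothetical sketch of a GRU subjectivity classifier (PyTorch).
# Illustrative only; see the full paper for the actual model and features.
import torch
import torch.nn as nn

class GRUSubjectivityClassifier(nn.Module):
    """Binary classifier: subjective (1) vs. objective (0) tweets."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Gated recurrent units read the tweet token by token.
        self.gru = nn.GRU(embed_dim, hidden_dim,
                          num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)  # single logit: subjective vs. objective

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)    # (batch, seq_len, embed_dim)
        _, hidden = self.gru(embedded)          # hidden: (num_layers, batch, hidden_dim)
        return self.fc(hidden[-1]).squeeze(-1)  # one logit per tweet

# Usage sketch: random token ids stand in for tokenized tweets.
model = GRUSubjectivityClassifier(vocab_size=50_000)
logits = model(torch.randint(1, 50_000, (8, 40)))     # batch of 8 tweets, 40 tokens each
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(8))  # dummy all-subjective labels

A comparable sketch of the feature-engineered direction could pair scikit-learn's LinearSVC with a feature matrix whose columns include the paper's structural and social-context features alongside standard lexical ones.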
Full paper Learning Subjective Language: Feature Engineered vs. Deep Models
Bibtex @InProceedings{ABDUL-MAGEED18.8,
  author = {Muhammad Abdul-Mageed},
  title = {Learning Subjective Language: Feature Engineered vs. Deep Models},
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {may},
  date = {7-12},
  location = {Miyazaki, Japan},
  editor = {Hend Al-Khalifa (King Saud University, KSA) and Walid Magdy (University of Edinburgh, UK) and Kareem Darwish (Qatar Computing Research Institute, Qatar) and Tamer Elsayed (Qatar University, Qatar)},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {979-10-95546-25-2},
  language = {english}
  }