Understanding Indirect Answers to Polar Questions

19 Oct 2020 - Arun Balajiee

Talk Speaker: Annie Louis

Talk Date: 10/19/2020

In this talk, Dr. Louis presented the four steps that her team followed in building a model that can classify indirect answers to polar questions. The talk was based on her recent publication. Prior work has shown that indirect answers have multiple possible interpretations, arising for example from politeness or from prosodic cues.

Essentially, the idea is to annotate questions that don’t have direct answers – such as “Do you like sports movies?”. These questions could in some cases be answered with a yes or no, but many times are answered indirectly, or over-answered (answered with too much information). The task is to annotate the questions and their answers into a dataset that can be used to train models to predict the interpretation of responses to these questions. Dr. Louis talked about the 4 steps – collect the questions on MTurk for a shortlist of topics, take help from 100 annotators to annotate the questions and their answers into six class labels (yes, no, probably yes, probably no, in the middle, among others), elicit the answers to these questions, and finally mark interpretations of the answers as well as the questions (how will X interpret Y’s answer). In some cases this labelling was hard, as for questions such as “Do you stay up late?” – which have an objective answer, but where what counts as “late” could be interpreted differently depending on the person being answered. The resulting annotated dataset, called “Circa”, has 34,268 question-answer pairs with 5 judgements each. In most cases, the annotators for the dataset showed reasonable agreement.
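To make the dataset’s shape concrete, here is a minimal sketch of what one Circa-style example might look like. The field names, the sixth catch-all label, and the sample values are my own assumptions for illustration, not the published schema.

```python
from dataclasses import dataclass
from enum import Enum


class Judgement(Enum):
    """Interpretation labels paraphrased from the talk; the exact
    label set in Circa may differ (the sixth label is assumed)."""
    YES = "yes"
    NO = "no"
    PROBABLY_YES = "probably yes"
    PROBABLY_NO = "probably no"
    IN_THE_MIDDLE = "in the middle"
    OTHER = "other"  # assumption: catch-all for any remaining labels


@dataclass
class CircaExample:
    """One question-answer pair with its annotator judgements."""
    context: str                  # hypothetical field: conversational scenario
    question: str                 # the polar question
    answer: str                   # the indirect answer
    judgements: list[Judgement]   # 5 judgements per pair, as in the dataset


example = CircaExample(
    context="X and Y are talking about movies",
    question="Do you like sports movies?",
    answer="I never miss a football film.",
    judgements=[Judgement.YES] * 5,
)
print(example.question, "->", [j.value for j in example.judgements])
```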

Finally, transfer from MNLI yielded a considerable improvement in prediction performance: a BERT-based model reached up to 84% accuracy when both questions and answers were included in model training, predicting the tone of responses to unseen question-answer pairs. In conclusion, Dr. Louis noted that there are potential improvements possible in the annotated dataset, starting with interpretation matches, combining results, and more fine-grained annotations of the dataset.
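As a rough illustration of the sentence-pair setup described above, the sketch below encodes a question and answer together and classifies the pair with the Hugging Face transformers library. The checkpoint name, the six-way label list, and the example strings are my assumptions; the paper’s exact configuration (including the MNLI transfer step) is not reproduced here.

```python
# Minimal sketch of BERT sentence-pair classification, assuming a 6-way label
# space; an illustration, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["yes", "no", "probably yes", "probably no", "in the middle", "other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

question = "Do you like sports movies?"
answer = "I never miss a football film."

# BERT receives the pair as two segments: [CLS] question [SEP] answer [SEP]
inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# With a freshly initialised head this prediction is meaningless; after
# fine-tuning on Circa-style data it would be the predicted interpretation.
print(LABELS[logits.argmax(dim=-1).item()])
```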

This talk was highly relevant to my own research idea of clustering a dataset of questions into different topics. Some of these ideas are things I could borrow, and I could argue that they are the state of the art!
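As a note to myself, a first pass at that clustering idea could be as simple as TF-IDF features plus k-means. The sketch below is entirely my own illustration (toy questions, arbitrary cluster count) and is unrelated to Dr. Louis’s method.

```python
# My own toy sketch: group questions into rough topics with TF-IDF + k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [
    "Do you like sports movies?",
    "Have you seen any good dramas lately?",
    "Do you stay up late?",
    "Are you a morning person?",
]

vectors = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for topic, q in zip(labels, questions):
    print(topic, q)
```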