Arun Balajiee

PhD Student, Intelligent Systems Program, University of Pittsburgh

Robust and Transparent AI in Search and Recommendation

16 Nov 2020 - Arun Balajiee

Talk Speaker: Paul Bennett

Talk Date: 2020-11-16

This talk was split into three parts: customizing an AI model with little labeled data, building a robust AI model, and making an AI model transparent. In the first section, Dr. Bennett discussed the scarcity of queries that can be labelled as “conversational” among the millions of queries that users issue on the Bing search engine. The goal is to make the engine’s query suggestions as conversational as possible using NLP and AI techniques. Using a Seq2Seq ranking technique and inductive weak supervision, Dr. Bennett explained how to build a model that can learn with little or no labelled training data. The usefulness of the final model was evaluated in studies with actual users and showed strong results. Next, he talked about building such a model without depending on a large training set. For this, he described a model that uses “anchor texts” in a document (the text of links pointing to other documents), filters documents down to those with useful content via weak labelling, and then trains another model on the shortlisted documents. The filtering model (a classifier) and the model that ranks documents for the results (a neural ranker) were trained jointly using weakly supervised reinforcement learning, producing good results.
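To make the anchor-text idea concrete, here is a minimal, purely illustrative sketch (not Dr. Bennett’s actual pipeline): anchor texts serve as a noisy weak label, a filter keeps documents whose weak label passes a threshold, and a simple overlap score stands in for the neural ranker. All function names and data are invented for this example.

```python
# Hypothetical sketch: anchor texts as weak supervision for filtering,
# followed by ranking of the surviving documents. Everything here is
# illustrative; the real system uses learned models, not term overlap.

def weak_label(query, anchor_texts):
    """Fraction of query terms found in any anchor text (a noisy signal)."""
    terms = set(query.lower().split())
    hits = {t for a in anchor_texts for t in a.lower().split() if t in terms}
    return len(hits) / len(terms) if terms else 0.0

def filter_docs(query, docs, threshold=0.5):
    """Keep documents whose weak label passes the confidence threshold."""
    return [d for d in docs if weak_label(query, d["anchors"]) >= threshold]

def rank(query, docs):
    """Order filtered docs by body term overlap (stand-in for a neural ranker)."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d["text"].lower().split())),
                  reverse=True)

docs = [
    {"id": 1, "text": "deep learning for search ranking", "anchors": ["search ranking tutorial"]},
    {"id": 2, "text": "cooking pasta recipes", "anchors": ["pasta recipes"]},
    {"id": 3, "text": "neural ranking models for web search", "anchors": ["web search ranking"]},
]

kept = filter_docs("search ranking", docs)
ordered = rank("search ranking", kept)
print([d["id"] for d in ordered])  # doc 2 is filtered out by its anchors
```

In the talk’s setting, both the filter and the ranker are learned and updated jointly, with the ranker’s performance providing the reward signal for the weakly supervised training.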

The second portion of the talk was about building a robust AI model that can handle domain adaptation. For this, he drew on an idea often discussed in the advertising field, “spurious correlations”, analysed using Bayesian probabilities. This led to the publication of a model, “Genie”, that uses counterfactual simulation for click prediction via a causal transfer random forest.
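A toy illustration of what a spurious correlation looks like in click data (this is not the Genie model, just an invented example): a “weekend” feature predicts clicks in one market only because promotions happen to run on weekends there, so a model that relies on it fails to transfer to a market where promotions run on weekdays.

```python
# Toy example of a spurious correlation breaking under domain shift.
# "promo" is the causal feature; "weekend" merely co-occurs with it
# in market A. All data below is synthetic and illustrative.

def p_click(rows, **conds):
    """Estimate P(click = 1 | conditions) from a list of row dicts."""
    match = [r for r in rows if all(r[k] == v for k, v in conds.items())]
    return sum(r["click"] for r in match) / len(match) if match else 0.0

market_a = [  # promotions run on weekends
    {"weekend": 1, "promo": 1, "click": 1},
    {"weekend": 1, "promo": 1, "click": 1},
    {"weekend": 0, "promo": 0, "click": 0},
    {"weekend": 0, "promo": 0, "click": 0},
]
market_b = [  # promotions run on weekdays
    {"weekend": 1, "promo": 0, "click": 0},
    {"weekend": 1, "promo": 0, "click": 0},
    {"weekend": 0, "promo": 1, "click": 1},
    {"weekend": 0, "promo": 1, "click": 1},
]

# The weekend -> click correlation is perfect in A but vanishes in B...
print(p_click(market_a, weekend=1), p_click(market_b, weekend=1))  # 1.0 0.0
# ...while conditioning on the causal feature (promo) is stable across markets.
print(p_click(market_a, promo=1), p_click(market_b, promo=1))      # 1.0 1.0
```

Identifying and discarding features like “weekend” while keeping causally stable ones is the intuition behind the counterfactual, causal-transfer approach described in the talk.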

The final portion of the talk was about building a transparent AI model. This involved giving users the option to have a recommender system show what they would be recommended if they “liked” or “disliked” a piece of content, surfaced through “hover” actions. Dr. Bennett described the success of a mixed-methods research design: qualitative data from an in-lab study with 11 participants and semi-structured interviews was used to understand fine-grained user preferences, while quantitative data bolstered these behavioural signals during usage. Additional quantitative data was obtained through MTurk studies with a between-subjects design. The results showed a decrease in decision anxiety and an increased sense of control when users could peek at the outcome of their “like” or “dislike”. Hence, the study concluded that systems should let users preview their changes and, where possible, show what has changed in the system. This gives the UX a sense of increased user control, transparency, and responsiveness. Future directions for this research are to understand whether a preview of changes is enough for a user to grasp the total change in the system, to predict the impact of UX design, and to identify UX that enhances implicit feedback signals and the overall experience.

An interesting aspect discussed in the closing moments of the talk was the existence of ethics boards in large corporations, analogous to IRBs at universities, which act as gatekeepers for user studies such as the ones conducted by Bennett et al.

Overall, the talk touched upon interesting topics of research in AI and HCI and conveyed Bennett’s passion for research in an industrial setting.