PhD Research Seminar: Transformers for sentiment analysis and natural language inference
First talk: Pre-Trained Transformers for Sentiment Analysis of Texts in Russian
Speaker: Sergey Smetanin, third-year PhD student, Department of Business Informatics
In recent years, transfer learning from pre-trained transformers has proven effective in a variety of natural language processing tasks, including sentiment analysis. However, training and fine-tuning transformers are usually extremely resource-intensive, so it can be challenging for researchers to select the best model among the available options. In this talk, we evaluate pre-trained transformers on six sentiment analysis datasets of Russian-language texts. Based on the evaluation results, we also construct a leaderboard that ranks the models by an aggregated classification score.
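The talk does not specify how the aggregated classification score is computed, but a common choice for multi-dataset leaderboards is the unweighted mean of per-dataset macro-averaged F1. A minimal sketch under that assumption (the function names and the averaging scheme are illustrative, not the speaker's actual method):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: the F1 score of each class, averaged with equal weight."""
    f1_per_class = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_per_class.append(2 * precision * recall / (precision + recall)
                            if precision + recall else 0.0)
    return sum(f1_per_class) / len(f1_per_class)

def leaderboard_score(per_dataset_scores):
    """Assumed aggregation: unweighted mean of one score per dataset."""
    return sum(per_dataset_scores) / len(per_dataset_scores)
```

For example, a model scored on six datasets would be ranked by `leaderboard_score` of its six macro-F1 values.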
Second talk: Exploring Models' Knowledge of Language through NLI Task
Speaker: Mariya Tikhonova, third-year PhD student, Faculty of Computer Science
With the development of text-processing technologies and, later, of deep learning methods for obtaining better text representations, language models, especially universal transformers, have reached increasingly advanced stages of natural language modelling. The actively developing field of model interpretation designs testing procedures that compare model performance to the human level and probe models' natural language understanding (NLU) abilities. In our study, we concentrate on the natural language inference (NLI) task, which is considered a valuable tool for evaluating the NLU abilities of neural language models. We propose a methodology for exploring model quality and stability with respect to the linguistic phenomena present in the text, and we carefully examine which linguistic features the model learns in the process of fine-tuning. We also perform a stability analysis of Multilingual BERT on the NLI task in five languages.
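The abstract does not detail how stability is quantified, but one standard approach is to fine-tune the model several times per language with different random seeds and report the mean and standard deviation of NLI accuracy. A minimal sketch under that assumption (the function name and data layout are illustrative, not the speaker's actual protocol):

```python
import statistics

def stability_report(runs_by_language):
    """For each language, summarize repeated fine-tuning runs as
    (mean accuracy, sample standard deviation). A small standard
    deviation suggests the model is stable for that language."""
    return {
        lang: (statistics.mean(accuracies), statistics.stdev(accuracies))
        for lang, accuracies in runs_by_language.items()
    }
```

A model's per-language accuracies across seeds would be passed in as a dict such as `{"en": [...], "ru": [...], ...}` for the five languages studied.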