PhD Research Seminar: Talks by students of the joint Master's-PhD programme

The event has ended

Where: https://zoom.us/j/442268392
When: June 7, 18:10–19:30 


First talk: Towards Integrating Centralized Multi-agent Path Finding with Decentralized Collision Avoidance   
Speaker: Stepan Dergachev, first-year Master's student, HSE Tikhonov Moscow Institute of Electronics and Mathematics

Avoiding collisions is the core problem in multi-agent navigation. In decentralized settings, where agents have limited communication and sensing capabilities, collisions are typically avoided in a reactive fashion, relying on local observations and communication. Prominent collision-avoidance techniques, e.g., ORCA, are computationally efficient and scale well to large numbers of agents. However, in numerous scenarios involving navigation through tight passages or confined spaces, deadlocks are likely to occur due to the egoistic behaviour of the agents, and as a result the latter cannot achieve their goals. To address this, we suggest applying locally confined multi-agent path finding (MAPF) solvers that coordinate the sub-groups of agents that appear to be in a deadlock (to detect the latter, we suggest a simple yet efficient ad hoc routine). We present a way to correctly build a grid-based MAPF instance, as typically required by modern MAPF solvers. We evaluate two such solvers in our experiments, namely PUSH AND ROTATE and a bounded-suboptimal version of CONFLICT BASED SEARCH (ECBS), and show that their inclusion in the navigation pipeline significantly increases the success rate, from 15% to 99% in certain cases.
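To illustrate the kind of ad hoc deadlock detection routine the abstract mentions, one simple heuristic is to flag an agent whose distance to its goal has stopped decreasing over a sliding window of recent observations. The sketch below is a hypothetical illustration; the window size, threshold, and class interface are assumptions, not the detection criterion used in the talk:

```python
from collections import deque
import math


class DeadlockDetector:
    """Flags an agent as potentially deadlocked if its distance to the
    goal has not decreased by more than `eps` over the last `window`
    observed positions. (Hypothetical sketch, not the talk's routine.)"""

    def __init__(self, window=5, eps=0.1):
        self.window = window
        self.eps = eps
        self.history = {}  # agent_id -> deque of recent distances to goal

    def update(self, agent_id, position, goal):
        dist = math.dist(position, goal)
        hist = self.history.setdefault(agent_id, deque(maxlen=self.window))
        hist.append(dist)

    def is_deadlocked(self, agent_id):
        hist = self.history.get(agent_id)
        if hist is None or len(hist) < self.window:
            return False  # too few observations to judge
        # No meaningful progress toward the goal over the window.
        return hist[0] - hist[-1] < self.eps


detector = DeadlockDetector(window=3, eps=0.1)
# Agent 0 oscillates near (5, 5) without approaching its goal at (10, 10).
for pos in [(5.0, 5.0), (5.05, 5.0), (5.0, 5.05)]:
    detector.update(0, pos, (10.0, 10.0))
# Agent 1 moves steadily toward the same goal.
for pos in [(0.0, 0.0), (2.0, 2.0), (4.0, 4.0)]:
    detector.update(1, pos, (10.0, 10.0))
```

Agents flagged this way would then be grouped into a locally confined MAPF instance and handed to a solver such as PUSH AND ROTATE or ECBS.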


Second talk: Improving Transformers for Source Code Processing      
Speaker: Sergey Troshin, first-year Master's student, Faculty of Computer Science

There is emerging interest in applying natural language processing models to source code processing tasks. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. We conduct a thorough empirical study of the ability of Transformers to utilize syntactic information in different tasks. We show that Transformers are able to make meaningful predictions based purely on syntactic information, and we outline best practices for taking syntactic information into account to improve model performance. Finally, we propose a simple yet effective method, based on identifier anonymization, for handling out-of-vocabulary (OOV) identifiers. Our method can be applied as a preprocessing step and is therefore easy to implement. We show that the proposed OOV anonymization significantly improves the performance of the Transformer in two code processing tasks: code completion and bug fixing.
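The identifier anonymization idea can be sketched as a simple preprocessing pass: every out-of-vocabulary identifier is consistently replaced with a placeholder within the snippet, so the model never sees rare names. This is a hypothetical illustration; the token format, the `vocab` set, and the `varN` placeholder scheme are assumptions, not the authors' exact implementation:

```python
def anonymize_identifiers(tokens, vocab):
    """Replace every identifier not in `vocab` with a placeholder
    (var1, var2, ...), reusing the same placeholder for repeated
    occurrences so that identifier structure is preserved."""
    mapping = {}
    out = []
    for tok in tokens:
        if tok.isidentifier() and tok not in vocab:
            if tok not in mapping:
                mapping[tok] = f"var{len(mapping) + 1}"
            out.append(mapping[tok])
        else:
            out.append(tok)
    return out, mapping


vocab = {"def", "return", "if", "else", "print"}
tokens = ["def", "total_price", "(", "base_cost", ",", "tax", ")", ":",
          "return", "base_cost", "+", "tax"]
anon, mapping = anonymize_identifiers(tokens, vocab)
# anon keeps keywords and punctuation, but maps base_cost -> var2
# in both of its occurrences.
```

Because the pass only renames tokens, it can be inverted after prediction using `mapping`, which is what makes it usable as a pure preprocessing step.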


Third talk: Variance Reduction in Monte Carlo Algorithms    
Speaker: Sofia Ivolgina, first-year Master's student, Faculty of Computer Science

Monte Carlo methods have become a very popular tool in many applications of statistics: machine learning, Bayesian inference, and other fields. Despite all their advantages, the issue of the high variance of Monte Carlo estimates is not completely solved. We explore deep learning algorithms applied to importance sampling methods for reducing this variance.
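As background for the importance-sampling setting the talk builds on, a minimal sketch: to estimate E_p[f(X)], draw samples from a proposal q instead of p and reweight each sample by p(x)/q(x); a well-chosen q can reduce variance dramatically. The Gaussian rare-event example below is our own illustration (the densities, the shifted proposal, and the seed are assumptions, not the talk's method):

```python
import math
import random


def importance_mc(f, sample_q, weight, n, rng):
    """Importance-sampling estimate of E_p[f(X)] using n draws from
    the proposal q, where weight(x) = p(x) / q(x)."""
    return sum(f(x) * weight(x) for x in (sample_q(rng) for _ in range(n))) / n


# Example: estimate P(X > 3) for X ~ N(0, 1), whose true value is about
# 0.00135. A plain Monte Carlo estimate is noisy because hits are rare;
# sampling from the shifted proposal q = N(3, 1) puts most draws in the
# region of interest. The likelihood ratio p(x)/q(x) simplifies to
# exp(-3x + 4.5) for these two unit-variance Gaussians.
rng = random.Random(0)
f = lambda x: 1.0 if x > 3.0 else 0.0
sample_q = lambda r: r.gauss(3.0, 1.0)
weight = lambda x: math.exp(-3.0 * x + 4.5)

est = importance_mc(f, sample_q, weight, 10_000, rng)
```

A learned proposal, e.g. one parameterized by a neural network, plays the same role as the hand-picked shift here: it concentrates samples where f is informative while the weights keep the estimator unbiased.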