I am Ksenia, a research scientist at DeepMind in Nando de Freitas's machine learning group. My research focuses on generalisation and adaptation to multiple tasks in offline reinforcement learning. These questions inspire me because they hold promise for solving many real-world problems.
I completed my PhD at EPFL's CVLab in January 2019, where I was supervised by Prof. Pascal Fua and Prof. Raphael Sznitman. In autumn 2017, during my internship at Google Research in Zurich, I had the chance to work with Vittorio Ferrari and Jasper Uijlings.
I obtained my M.Sc. degree in Algorithms and Machine Learning from the University of Helsinki. During that time, I also worked as a research assistant in the CoSCo group at HIIT. Before that, I studied in Russia at the Higher School of Economics, in the Faculty of Business Informatics and Applied Mathematics.
At DeepMind I work on offline reinforcement learning and reward learning. In particular, I am interested in generalisation and adaptation to various tasks in multi-task, few-shot, and zero-shot learning. My research is often motivated by challenges in data-driven robotics. During my PhD I worked on active learning for various classification tasks, with a focus on vision applications. I developed a meta-learning approach to active learning, in which a strategy is learnt from previously encountered problems.
December, 2020: We presented two articles on reward learning in offline reinforcement learning, Semi-supervised reward learning for offline reinforcement learning (video) and Offline Learning from Demonstrations and Unlabeled Experience (video), at the Offline RL workshop at NeurIPS.
June, 2020: Our article Scaling data-driven robotics with reward sketching and batch reinforcement learning was presented at RSS. For more details, see our website, video, and dataset.
April, 2019: I started a new job as a Research Scientist at DeepMind!