Bayesian Methods—Advanced Machine Learning

Instructor:

Dmitry Petrovich Vetrov

Associate Professor, Big Data and Information Retrieval Department

The course addresses Bayesian methods for solving various machine learning and data processing problems (classification, dimensionality reduction, topic modeling, collaborative filtering, etc.). The Bayesian approach to probability theory allows one to take into account the user's preferences and task-specific properties when building a model. In addition, it offers an efficient framework for model selection. We will cover automatic feature selection, determining the number of components in probability mixtures, estimating the dimension of a latent subspace, setting regularization coefficients in a principled way, and related problems. We will review several simple models that can be used as building blocks for constructing more complex probabilistic models. The course presents general tools for building probabilistic models and for designing inference algorithms in those models. We will conclude with the basics of probabilistic graphical models, which are a further extension of the Bayesian framework.
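As a small illustration of the two ideas mentioned above (encoding preferences in a prior and selecting a model via its evidence), here is a minimal sketch, not taken from the course materials: a Beta-Bernoulli conjugate update in Python, with the data counts and the two candidate priors chosen purely as hypothetical examples.

```python
# Illustrative sketch: Beta-Bernoulli conjugate update and model selection
# via the marginal likelihood (evidence). Data and priors are hypothetical.
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior(alpha, beta, heads, tails):
    # Conjugate update: Beta(alpha, beta) prior + Bernoulli observations
    # -> Beta(alpha + heads, beta + tails) posterior.
    return alpha + heads, beta + tails

def log_evidence(alpha, beta, heads, tails):
    # Marginal likelihood of a fixed Bernoulli sequence under a Beta prior:
    # p(D) = B(alpha + heads, beta + tails) / B(alpha, beta)
    return log_beta(alpha + heads, beta + tails) - log_beta(alpha, beta)

data = (7, 3)  # 7 successes, 3 failures (hypothetical)

# Two candidate priors ("models"): a vague one and one biased toward a fair coin.
for name, (a, b) in {"vague": (1, 1), "fair-coin": (20, 20)}.items():
    a_post, b_post = posterior(a, b, *data)
    print(f"{name:10s} posterior=Beta({a_post}, {b_post}) "
          f"evidence={exp(log_evidence(a, b, *data)):.4f}")
```

Comparing the evidence of the two priors is the simplest instance of Bayesian model selection; the same principle underlies the course topics such as choosing the number of mixture components or setting regularization coefficients.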

Intended Learning Outcomes

Upon completion of this course, the student will be able to:

  • Apply existing advanced Bayesian models for data processing and understand their pros and cons
  • Build their own probabilistic models for a particular problem
  • Develop exact or approximate inference algorithms for given probabilistic models
  • Formulate domain-specific knowledge in terms of prior distributions
  • Read and discuss research papers on probabilistic methods in machine learning, computer vision, collaborative filtering, text processing, etc.