Sergey Nikolenko


Teaching activities

Machine Learning at the Kazan Federal University, 2014

This is a semester-long machine learning course taught at Kazan Federal University with financial support from the Dynasty Foundation; see also the course page at the CSClub website.

The course itself (all slides and lecture notes are in Russian):

1. Introduction. History of AI. Probability theory basics. Bayes' theorem and maximum a posteriori hypotheses.
Slides (.pdf, 304kb)
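
Bayes' theorem and the MAP idea from lecture 1 can be illustrated numerically; the diagnostic-test numbers below are invented for the sketch, not taken from the slides.

```python
# Bayes' theorem on a toy diagnostic test (all numbers hypothetical):
# P(D | +) = P(+ | D) P(D) / P(+).
p_d = 0.01        # prior P(disease)
p_pos_d = 0.95    # sensitivity, P(+ | disease)
p_pos_nd = 0.05   # false positive rate, P(+ | no disease)

# law of total probability for the evidence P(+)
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)
posterior = p_pos_d * p_d / p_pos   # P(disease | +)
print(round(posterior, 3))          # about 0.161
```

Even with a 95% sensitive test the posterior stays small because the prior is small; the MAP hypothesis here is still "no disease".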
2. Probability distributions. Bernoulli trials. Maximum likelihood, ML estimates for Bernoulli trials and multinomial distribution. Prior distributions, conjugate priors. Beta distribution as a conjugate prior for Bernoulli trials. Predictive distribution: Laplace's rule. Dirichlet distribution as a conjugate prior for multinomial distributions.
3. Gaussian distribution. Maximum likelihood estimates for the Gaussian; why the ML estimate for variance is biased. Multidimensional Gaussian. Conditional and marginal Gaussians.
Slides for lectures 2-3 (.pdf, 741kb)
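
The Beta-Bernoulli conjugacy from lecture 2 reduces to bookkeeping on counts; the prior and the data below are made up for illustration.

```python
# Conjugate Beta prior for Bernoulli trials: prior Beta(a, b) plus
# h heads and t tails gives posterior Beta(a + h, b + t).
a, b = 1.0, 1.0   # uniform prior, Beta(1, 1)
h, t = 7, 3       # observed coin flips (invented)

a_post, b_post = a + h, b + t
# Predictive probability of heads = posterior mean (a+h)/(a+b+h+t);
# with a = b = 1 this is exactly Laplace's rule of succession.
p_heads = a_post / (a_post + b_post)
print(p_heads)   # (7 + 1) / (10 + 2) = 2/3
```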
4. Least squares regression. Least squares as an ML estimate for Gaussian noise.
Slides (.pdf, 329kb)
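
The closed-form least squares solution from lecture 4 fits in a few lines for a one-dimensional line fit; the data points are invented and noiseless so that the fit recovers the true line exactly.

```python
# Simple least squares fit of y = w0 + w1 * x in closed form.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
# slope = covariance(x, y) / variance(x), intercept from the means
w1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
     / sum((x - mx) ** 2 for x in xs)
w0 = my - w1 * mx
print(w0, w1)   # recovers 1.0 and 2.0
```

Under the Gaussian-noise view from the lecture, this same (w0, w1) is the maximum likelihood estimate.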
5. Overfitting. Regularization. Ridge regression and lasso regression. Predictive distribution for linear regression. Classification: 1-of-K representation, linear decision functions. Fisher's linear discriminant.
Slides (.pdf, 1902kb)
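
The shrinkage effect of ridge regression from lecture 5 is easiest to see in one dimension, where the closed form is a single fraction; the data and the regularization constant are chosen arbitrarily.

```python
# Ridge regression in 1D with no intercept: minimizing
# sum (y - w x)^2 + lam * w^2 gives w = sum(x y) / (sum(x^2) + lam).
def ridge_w(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) \
           / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # noiseless y = 2x

print(ridge_w(xs, ys, 0.0))       # 2.0 -- plain least squares
print(ridge_w(xs, ys, 14.0))      # 1.0 -- the weight is shrunk toward zero
```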
6. Bayes' theorem for classification. LDA and QDA. Logistic regression.
Slides (.pdf, 1290kb)
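
Logistic regression from lecture 6 can be trained by plain batch gradient descent on the log-loss; the one-dimensional data set below is made up (labels follow the sign of x).

```python
# Logistic regression by batch gradient descent (toy 1D data).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logreg(xs, ys, lr=0.5, epochs=200):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of the log-loss
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logreg(xs, ys)
print(sigmoid(w * 2.0 + b) > 0.5, sigmoid(w * -2.0 + b) < 0.5)
```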
7. Statistical decision theory. Regression function, optimal Bayesian classifier. Nearest neighbors. Curse of dimensionality. Bias-variance-noise decomposition.
Slides (.pdf, 545kb)
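
The nearest-neighbors classifier from lecture 7 is short enough to write out in full; the 2D points, labels, and k below are invented for illustration.

```python
# k-nearest-neighbors classification in 2D (toy data).
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label); returns the majority label
    among the k training points closest to query."""
    neighbors = sorted(train,
                       key=lambda p: (p[0][0] - query[0]) ** 2
                                   + (p[0][1] - query[1]) ** 2)[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
print(knn_predict(train, (0.5, 0.5)))  # 'a'
print(knn_predict(train, (5.5, 5.5)))  # 'b'
```

The curse of dimensionality discussed in the lecture is exactly what breaks this scheme in high dimensions: distances concentrate, and "nearest" stops being informative.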
8. Reinforcement learning: multiarmed bandits. Greedy policies, exploration vs. exploitation. Confidence intervals. Minimizing regret: UCB1.
Slides (.pdf, 265kb)
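
UCB1 from lecture 8 is a one-line index policy: play the arm maximizing the empirical mean plus a confidence bonus. The Bernoulli arm probabilities and horizon below are arbitrary.

```python
# UCB1 on Bernoulli bandits: pick argmax_i mean_i + sqrt(2 ln t / n_i).
import math, random

def ucb1(probs, horizon, seed=0):
    rng = random.Random(seed)
    k = len(probs)
    counts = [0] * k    # pulls per arm
    sums = [0.0] * k    # total reward per arm
    for t in range(1, horizon + 1):
        if t <= k:      # initialization: play each arm once
            arm = t - 1
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                                  + math.sqrt(2 * math.log(t) / counts[i]))
        sums[arm] += 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
print(counts)   # the 0.8 arm gets the overwhelming majority of pulls
```

Suboptimal arms are pulled only O(log T) times, which is what gives UCB1 its logarithmic regret bound.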
9. Reinforcement learning: Markov decision processes. On-policy and off-policy learning. TD-learning. Machine learning in games (backgammon, chess, go).
Slides (.pdf, 686kb)
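
As a concrete instance of the TD methods in lecture 9, here is tabular Q-learning (an off-policy TD algorithm) on a tiny chain MDP; the environment, learning rate, and discount are invented for the sketch.

```python
# Tabular Q-learning on a deterministic 4-state chain: states 0..3,
# actions 0=left / 1=right, reward 1 only on reaching terminal state 3.
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(4)]   # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 3:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == 3 else 0.0
            target = r + (0.0 if s2 == 3 else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])   # TD update
            s = s2
    return q

q = q_learning()
print([round(max(row), 2) for row in q[:3]])   # converges to 0.81, 0.9, 1.0
```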
10. Clustering. Hierarchical clustering, graph-based clustering. The EM algorithm. EM in general, minorization-maximization, why EM improves the likelihood. EM for clustering.
Slides (.pdf, 805kb)
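
The EM-for-clustering idea from lecture 10 can be shown in a deliberately stripped-down form: a two-component 1D Gaussian mixture with equal weights and unit variances, so that only the means are learned. The data and initialization are made up.

```python
# EM for a two-component 1D Gaussian mixture (means only).
import math

def em_two_means(xs, mu0, mu1, iters=50):
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p0 = math.exp(-0.5 * (x - mu0) ** 2)
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            r.append(p1 / (p0 + p1))
        # M-step: responsibility-weighted mean updates
        mu0 = sum((1 - ri) * x for ri, x in zip(r, xs)) / sum(1 - ri for ri in r)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / sum(ri for ri in r)
    return mu0, mu1

xs = [-2.1, -1.9, -2.0, 3.9, 4.1, 4.0]   # two well-separated clusters
mu0, mu1 = em_two_means(xs, -1.0, 1.0)
print(round(mu0, 2), round(mu1, 2))      # close to -2.0 and 4.0
```

Each iteration is a minorize-maximize step, so the likelihood never decreases, which is exactly the guarantee discussed in the lecture.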
11. Hidden Markov models. Baum-Welch algorithm. Applications of hidden Markov models to speech recognition.
Slides (.pdf, 292kb)
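
The forward algorithm, the dynamic-programming building block inside Baum-Welch from lecture 11, computes the likelihood of an observation sequence; the weather-style model parameters below are invented.

```python
# Forward algorithm for an HMM: P(observations) by dynamic programming.
states = range(2)                       # 0 = rainy, 1 = sunny
start = [0.5, 0.5]
trans = [[0.7, 0.3], [0.4, 0.6]]        # trans[i][j] = P(next = j | now = i)
emit = [[0.9, 0.1], [0.2, 0.8]]         # emit[i][o]  = P(obs = o | state = i)

def forward(obs):
    # alpha[j] = P(obs so far, current state = j)
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][o]
                 for j in states]
    return sum(alpha)

print(forward([0]))        # 0.5 * 0.9 + 0.5 * 0.2 = 0.55
print(forward([0, 0, 1]))  # likelihood of a length-3 sequence
```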
12. Probabilistic graphical models: basic idea, factorizations, d-separation. Directed and undirected models. Factor graphs.
Slides (.pdf, 930kb)
13. Inference on factor graphs. Belief propagation with the message passing algorithm.
Slides (.pdf, 820kb)
14. Case study: Bayesian rating systems. Bradley–Terry models. Expectation Propagation, TrueSkill, and its extensions.
Slides (.pdf, 2398kb)
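
The Bradley-Terry model from lecture 14 can be fit with the standard minorization-maximization update, s_i <- w_i / sum_{j != i} n_ij / (s_i + s_j), where w_i counts i's wins and n_ij the games between i and j; the win table below is invented.

```python
# Fitting Bradley-Terry skill parameters by minorization-maximization.
def bradley_terry(wins, iters=100):
    """wins[i][j] = number of times player i beat player j."""
    n = len(wins)
    s = [1.0] * n
    for _ in range(iters):
        s_new = []
        for i in range(n):
            w_i = sum(wins[i])                       # total wins of player i
            denom = sum((wins[i][j] + wins[j][i]) / (s[i] + s[j])
                        for j in range(n) if j != i)
            s_new.append(w_i / denom)
        total = sum(s_new)
        s = [x / total * n for x in s_new]           # fix the overall scale
    return s

wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
s = bradley_terry(wins)
print(s)   # player 0 comes out strongest, player 2 weakest
```

TrueSkill, covered in the same lecture, replaces this maximum likelihood fit with Bayesian inference (via expectation propagation) over per-player skill distributions.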
15. Approximate inference in PGMs. Loopy belief propagation. Variational approximations (idea).
16. Sampling and approximate inference with sampling. Markov chain Monte Carlo methods.
Slides (.pdf, 658kb)
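
A minimal MCMC sketch for lecture 16: Metropolis-Hastings with a symmetric random-walk proposal, targeting a standard normal. The step size and chain length are chosen arbitrarily.

```python
# Metropolis-Hastings with a random-walk proposal; log_p is the
# unnormalized log-density of the target distribution.
import math, random

def metropolis(log_p, n, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n):
        cand = x + rng.uniform(-step, step)
        # accept with probability min(1, p(cand) / p(x))
        if math.log(rng.random() + 1e-300) < log_p(cand) - log_p(x):
            x = cand
        samples.append(x)
    return samples

def log_std_normal(x):
    return -0.5 * x * x   # standard normal, up to a constant

samples = metropolis(log_std_normal, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # near 0 and 1
```

Note that only the ratio of densities is needed, which is why MCMC works with unnormalized distributions, the typical situation in Bayesian inference.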
17. Case study: text mining. Naive Bayes. Latent Dirichlet allocation and its extensions.
Slides (.pdf, 826kb)
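
The naive Bayes classifier from lecture 17, with Laplace (add-one) smoothing, on a made-up two-class toy corpus:

```python
# Multinomial naive Bayes with add-one smoothing (toy documents).
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (list_of_words, label)."""
    priors = Counter(label for _, label in docs)        # class counts
    counts = {c: Counter() for c in priors}             # word counts per class
    for words, label in docs:
        counts[label].update(words)
    vocab = {w for words, _ in docs for w in words}
    return priors, counts, vocab

def predict_nb(priors, counts, vocab, words):
    total_docs = sum(priors.values())
    best, best_lp = None, -math.inf
    for c in priors:
        total_c = sum(counts[c].values())
        lp = math.log(priors[c] / total_docs)
        for w in words:   # add-one smoothing avoids zero probabilities
            lp += math.log((counts[c][w] + 1) / (total_c + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [(["free", "money", "now"], "spam"),
        (["cheap", "money", "offer"], "spam"),
        (["meeting", "project", "report"], "ham"),
        (["project", "deadline", "report"], "ham")]
model = train_nb(docs)
print(predict_nb(*model, ["free", "offer"]))       # spam
print(predict_nb(*model, ["project", "meeting"]))  # ham
```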
18. Support vector machines. Kernel trick for SVMs.
Slides (.pdf, 553kb)
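
The kernel trick from lecture 18 can be demonstrated without the full SVM machinery: the kernel perceptron below touches the data only through a kernel function k(x, x'), so it handles XOR-like data that no linear separator can. (An actual SVM adds the max-margin objective on top; the data and kernel width here are made up.)

```python
# Kernel perceptron with an RBF kernel -- a minimal kernel-trick demo.
import math

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train(data, epochs=10):
    alpha = [0] * len(data)   # mistake counts; the "dual" weights
    for _ in range(epochs):
        for i, (x, y) in enumerate(data):
            f = sum(a * yj * rbf(xj, x)
                    for a, (xj, yj) in zip(alpha, data))
            if y * f <= 0:    # mistake: add this example to the expansion
                alpha[i] += 1
    return alpha

def predict(alpha, data, x):
    f = sum(a * yj * rbf(xj, x) for a, (xj, yj) in zip(alpha, data))
    return 1 if f >= 0 else -1

# XOR-like labels: not linearly separable in the input space
data = [((0, 0), 1), ((1, 1), 1), ((0, 1), -1), ((1, 0), -1)]
alpha = train(data)
print([predict(alpha, data, x) for x, _ in data])  # [1, 1, -1, -1]
```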
19. Case study: recommender systems. Nearest neighbors: user-based and item-based. Locality sensitive hashing.
20. Case study: recommender systems. SVD extensions. Additional information in recommender systems. Course review.
Slides for lectures 19-20 (.pdf, 1100kb)
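
The latent-factor ("SVD-style") models from lecture 20 are usually trained by stochastic gradient descent on the observed ratings only; here is a bare-bones sketch on a tiny invented rating matrix, with hyperparameters chosen arbitrarily and no bias terms.

```python
# Latent-factor recommender r_ui ~ p_u . q_i trained by SGD.
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02,
              epochs=2000, seed=0):
    rng = random.Random(seed)
    p = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(p[u][f] * q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):   # gradient step on the regularized loss
                pu, qi = p[u][f], q[i][f]
                p[u][f] += lr * (err * qi - reg * pu)
                q[i][f] += lr * (err * pu - reg * qi)
    return p, q

# (user, item, rating) triples; most of the matrix is unobserved
ratings = [(0, 0, 5), (0, 1, 4), (0, 2, 1),
           (1, 0, 4), (1, 1, 5),
           (2, 0, 1), (2, 2, 5)]
p, q = factorize(ratings, 3, 3)
rmse = (sum((r - sum(p[u][f] * q[i][f] for f in range(2))) ** 2
            for u, i, r in ratings) / len(ratings)) ** 0.5
print(round(rmse, 2))   # training RMSE should be small
```

Unobserved entries such as user 1's rating of item 2 are then filled in from the learned factors, which is the whole point of the model.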

Selected references.

  1. Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, Information Science and Statistics series, 2006.
  2. Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
  3. David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.