Course Meeting Times
Lectures: 2 sessions / week, 1.5 hours / session
Course Description
We introduce and motivate the main theme of the course: learning from examples, cast as the problem of approximating a multivariate function from sparse data. We present an overview of the theoretical part of the course and sketch the connection between classical Regularization Theory and its algorithms (including Support Vector Machines) and Learning Theory, the two cornerstones of the course. We mention recent theoretical developments that provide a new perspective on the foundations of the theory, and we briefly describe several applications ranging from vision and computer graphics to finance and neuroscience.
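As a small illustration of this theme (a sketch, not course material), the Python snippet below approximates a function of one variable from a few noisy examples using kernel regularized least squares, the basic Tikhonov-regularization algorithm underlying much of the theory; the Gaussian kernel width and the regularization parameter lam are illustrative choices.

    import numpy as np

    # Regularized least squares in an RKHS with a Gaussian kernel:
    #   minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    # By the representer theorem, f(x) = sum_i c_i k(x, x_i),
    # with coefficients solving (K + lam * n * I) c = y.

    def gaussian_kernel(a, b, sigma=0.5):
        # Pairwise Gaussian kernel matrix between 1-D point sets a and b.
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    n = 12                                        # "sparse data": few examples
    x = rng.uniform(-3, 3, size=n)
    y = np.sin(x) + 0.1 * rng.standard_normal(n)  # noisy samples of the target

    lam = 1e-2                                    # regularization parameter
    K = gaussian_kernel(x, x)
    c = np.linalg.solve(K + lam * n * np.eye(n), y)

    x_test = np.linspace(-3, 3, 200)
    f_test = gaussian_kernel(x_test, x) @ c       # smooth estimate of sin(x)
    print("max error:", np.max(np.abs(f_test - np.sin(x_test))))

A larger lam gives a smoother but less data-faithful fit; this trade-off between stability and fitting the data is exactly what the regularization theory developed in the course makes precise.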
Prerequisites
18.02, 9.641, 6.893, or permission of instructor. In practice, a substantial level of mathematical maturity is necessary. Familiarity with probability and functional analysis will be very helpful. We try to keep the mathematical prerequisites to a minimum, but we will introduce advanced material at a fast pace.
Grading
There will be two problem sets, a MATLAB® assignment, and a final project. To receive credit, you must attend class regularly and make a serious effort on all problem sets and the final project.
Problem Sets
See the assignments page for the two problem sets.
Projects
Some of the most promising project topics:
Hypothesis testing with small data sets
Connection between MED (maximum entropy discrimination) and regularization
Feature selection for SVMs: theory and experiments
Bayes classification rule and SVMs
IOHMMs: evaluation of HMMs for classification vs. direct classification
Reusing the test set: data mining bounds
Large-scale nonlinear least square regularization
View-based classification
Local vs. global classifiers: experiments and theory
RKHS invariance to measure: historical math
Concentration experiments, dot product vs. squared distance (see the sketch after this list)
Decorrelating classifiers: experiments about generalization using a tree of stumps
Kernel synthesis and selection
Bayesian interpretation of regularization and in particular of SVMs
History of induction from Kant to Popper, and its current state
Bayesian Priorhood
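For the concentration topic above, a minimal numerical sketch (the dimensions, sample count, and Gaussian distribution are illustrative assumptions) shows how both the dot product and the squared distance between random points fluctuate less and less around their means as the dimension grows:

    import numpy as np

    # Pairs of independent points x, y with i.i.d. N(0, 1/d) coordinates,
    # so E<x, y> = 0 and E||x - y||^2 = 2 regardless of the dimension d.
    rng = np.random.default_rng(0)
    for d in (10, 100, 1000, 10000):
        x = rng.standard_normal((2000, d)) / np.sqrt(d)
        y = rng.standard_normal((2000, d)) / np.sqrt(d)
        dots = np.sum(x * y, axis=1)          # fluctuation ~ 1/sqrt(d)
        dists = np.sum((x - y) ** 2, axis=1)  # fluctuation ~ sqrt(8/d)
        print(f"d={d:6d}  std <x,y> = {dots.std():.4f}  "
              f"std ||x-y||^2 = {dists.std():.4f}")

Both statistics concentrate at the 1/sqrt(d) rate in this setting; a project could compare the constants and distributions involved, and the behavior of classifiers built on each quantity.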
Resources
The Center for Biological and Computational Learning (CBCL) at MIT was founded with the belief that learning is at the very core of the problem of intelligence, both biological and artificial, and is the gateway to understanding how the human brain works and to making intelligent machines. CBCL studies the problem of learning through a multidisciplinary approach. Its main goal is to nurture serious research on the mathematics, engineering, and neuroscience of learning. CBCL is based in the Department of Brain and Cognitive Sciences at MIT and is associated with the McGovern Institute for Brain Research and with the Artificial Intelligence Laboratory at MIT.