Abstract: Within the framework of sequential prediction (online learning), data arrive in a stream, and the learner is tasked with making a sequence of decisions. This basic scenario has been studied in Information Theory, Decision Theory, Game Theory, Statistics, and Machine Learning. The learning protocol and the non-i.i.d. (or even adversarial) nature of the observed data constitute a significant departure from the well-studied setting of Statistical Learning Theory. In the latter, many important tools and complexity notions for hypothesis classes have been developed, starting with the pioneering work of Vapnik and Chervonenkis. In contrast, the theoretical understanding of online learning has been lacking, as most results have been obtained on a case-by-case basis. In this talk, we first focus on no-regret online learning and develop the relevant notions of complexity in a surprising parallel to Statistical Learning Theory. We characterize online learnability through the finiteness of sequential versions of combinatorial dimensions, random averages, and covering numbers. This non-constructive study of inherent complexity is then augmented with a recipe for developing online learning algorithms via the notion of a relaxation. To demonstrate the utility of our approach, we develop a new family of randomized methods and new algorithms for the matrix completion problem. We then discuss extensions of our techniques beyond no-regret learning, including Blackwell approachability and calibration.
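
To make the complexity characterization concrete, here is one standard formulation of the sequential Rademacher average (a sketch; the notation, including the symbols $\mathcal{F}$, $\mathcal{X}$, and $\mathbf{x}$, is chosen here for illustration and may differ from the talk's). For a class $\mathcal{F}$ of functions $f : \mathcal{X} \to \mathbb{R}$,
\[
\mathfrak{R}_n(\mathcal{F}) \;=\; \sup_{\mathbf{x}} \, \mathbb{E}_{\epsilon}\!\left[\, \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{t=1}^{n} \epsilon_t \, f\big(\mathbf{x}_t(\epsilon_{1:t-1})\big) \right],
\]
where the supremum ranges over $\mathcal{X}$-valued binary trees $\mathbf{x}$ of depth $n$, the $\epsilon_t$ are i.i.d. Rademacher ($\pm 1$) signs, and $\mathbf{x}_t(\epsilon_{1:t-1})$ denotes the element of the tree reached by following the path $\epsilon_1, \ldots, \epsilon_{t-1}$. The dependence of each $\mathbf{x}_t$ on the past signs replaces the fixed sample of the classical i.i.d. Rademacher average and captures the sequential nature of the problem; roughly, a class is online learnable precisely when $\mathfrak{R}_n(\mathcal{F}) \to 0$.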
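
The relaxation recipe admits a similarly brief sketch (again with illustrative notation: $\mathcal{D}$ a decision set, $\mathcal{Z}$ an outcome set, $\ell$ a loss function). A relaxation $\mathbf{Rel}_n$ maps prefixes of outcomes to real numbers and is called admissible if, for every round $t$ and every history $z_1, \ldots, z_{t-1}$,
\[
\inf_{q \in \Delta(\mathcal{D})} \sup_{z_t \in \mathcal{Z}} \Big\{ \mathbb{E}_{\hat y_t \sim q}\, \ell(\hat y_t, z_t) + \mathbf{Rel}_n(z_1, \ldots, z_t) \Big\} \;\le\; \mathbf{Rel}_n(z_1, \ldots, z_{t-1}),
\qquad
\mathbf{Rel}_n(z_1, \ldots, z_n) \;\ge\; -\inf_{f \in \mathcal{F}} \sum_{t=1}^{n} \ell(f, z_t).
\]
Any strategy that (near-)achieves the infimum at each round, for instance by playing a randomized minimizer $q_t$, enjoys the regret bound $\mathrm{Reg}_n \le \mathbf{Rel}_n(\emptyset)$. The sequential Rademacher complexity itself yields an admissible relaxation, which is one way the non-constructive theory gives rise to concrete algorithms.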