Learning to Match Distributions:
Optimal Transport, Flow Matching & Applications

Fall 2026 · New York University
Romain Lopez
Time & location TBD

This seminar course explores how to match and transform probability distributions for machine learning applications. The course begins with a few lectures on optimal transport and flow matching, then transitions into a reading group covering contemporary methods and applications across scientific domains.

Prerequisites & requirements. The class is open to graduate students with a background in machine learning who have taken a course in probability and statistics at the level of Fernandez-Granda (2024) or equivalent. Familiarity with convex optimization is helpful but not required. Students will present and review papers throughout the semester and complete a semester-long research project.

Part I — Mathematical Foundations

Tentative list of topics covered (short illustrative code sketches for each topic follow the list):

Comparing distributions: divergences, MMD & kernel methods. f-divergences, kernel mean embeddings, maximum mean discrepancy, two-sample testing.
Optimal transport: computing couplings. Monge and Kantorovich formulations, Wasserstein distances, duality, entropic regularization, Sinkhorn algorithm.
Flow matching: continuous morphing of distributions. Continuity equation, conditional flow matching, probability paths, connections to score-based diffusion.
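
To make the kernel-methods material concrete, the sketch below computes an unbiased estimate of the squared maximum mean discrepancy (MMD) between two samples in numpy, assuming a Gaussian (RBF) kernel with a hand-picked bandwidth; the function names and the toy Gaussian data are illustrative only.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd_squared(x, y, bandwidth=1.0):
    # Unbiased estimator of MMD^2 between samples x ~ P and y ~ Q:
    # off-diagonal means of k(x, x) and k(y, y) minus twice the mean of k(x, y).
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    n, m = len(x), len(y)
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))  # samples from P
y = rng.normal(0.5, 1.0, size=(500, 2))  # samples from Q (shifted mean)
print(mmd_squared(x, y))  # positive value: the two samples differ

A permutation test on the pooled sample turns this statistic into the two-sample test mentioned in the topic above.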
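
For the optimal transport lectures, the following sketch runs the Sinkhorn algorithm for entropy-regularized transport between two small point clouds with a squared Euclidean cost; the regularization strength epsilon and the fixed iteration count are illustrative choices rather than recommendations.

import numpy as np

def sinkhorn(a, b, cost, epsilon=0.1, n_iters=500):
    # Entropic OT: alternately rescale the rows and columns of K = exp(-C / eps)
    # until the coupling u_i K_ij v_j has marginals a and b.
    K = np.exp(-cost / epsilon)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    coupling = u[:, None] * K * v[None, :]
    return coupling, np.sum(coupling * cost)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(50, 2))  # source points
y = rng.normal(2.0, 1.0, size=(60, 2))  # target points
a = np.full(50, 1 / 50)                 # uniform source weights
b = np.full(60, 1 / 60)                 # uniform target weights
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
plan, ot_cost = sinkhorn(a, b, cost)
print(ot_cost)  # entropic approximation of the transport cost

As epsilon shrinks, the coupling approaches an optimal Kantorovich plan, but the iteration needs more steps and becomes numerically delicate, which motivates the log-domain implementations discussed in the literature.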
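
For the flow matching lectures, this sketch evaluates the conditional flow matching regression loss under the linear probability path x_t = (1 - t) x0 + t x1, whose conditional velocity is x1 - x0; the affine velocity_field model and its parameters are hypothetical stand-ins for the neural network one would actually train.

import numpy as np

rng = np.random.default_rng(0)

def velocity_field(x, t, theta):
    # Hypothetical toy model: an affine vector field v_theta(x, t) = x W^T + t c.
    W, c = theta
    return x @ W.T + np.outer(t, c)

def cfm_loss(theta, x0, x1):
    # Conditional flow matching loss: regress v_theta(x_t, t) onto the
    # conditional velocity u_t = x1 - x0 along the linear path
    # x_t = (1 - t) x0 + t x1, with t drawn uniformly on [0, 1].
    t = rng.uniform(size=(len(x0), 1))
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = velocity_field(xt, t[:, 0], theta)
    return np.mean((pred - target) ** 2)

x0 = rng.normal(0.0, 1.0, size=(256, 2))  # source (noise) samples
x1 = rng.normal(3.0, 0.5, size=(256, 2))  # target ("data") samples
theta = (np.zeros((2, 2)), np.zeros(2))
print(cfm_loss(theta, x0, x1))  # regression loss to be minimized over theta

Minimizing this loss over theta and then integrating dx/dt = v_theta(x, t) from t = 0 to t = 1 transports source samples toward the target distribution.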

Part II — Methods (Reading Group)

We will set the syllabus as we go. Below are some topics we may discuss.

Part III — Applications (Reading Group)

We will set the syllabus as we go. Below are some topics we may discuss.