The MaD Seminar

The MaD seminar features leading specialists at the interface of Applied Mathematics, Statistics and Machine Learning.

Room: Auditorium Hall 150, Center for Data Science, NYU, 60 5th Ave.

Time: 2:00pm-3:00pm

Subscribe to the Seminar Mailing list here. If you experience any problems, please contact one of the organizers (Joan Bruna jb4496@nyu.edu, Yanjun Han yh5107@nyu.edu, Qi Lei ql518@nyu.edu).

Schedule with Confirmed Speakers

Date Speaker Title
September 12 Joel A. Tropp (Caltech) Randomly Pivoted Cholesky
September 19 Cyril Letrouit (Orsay) Stability of optimal transport: old and new

Abstracts

Joel A. Tropp: Randomly pivoted Cholesky

André-Louis Cholesky entered École Polytechnique as a student in 1895. Before 1910, during his work as a surveyor for the French army, Cholesky invented a technique for solving positive-definite systems of linear equations. Cholesky’s method can also be used to approximate a positive-semidefinite (psd) matrix using a small number of columns, called “pivots”. A longstanding question is how to choose the pivot columns to achieve the best possible approximation.

This talk describes a simple but powerful randomized procedure for adaptively picking the pivot columns. This algorithm, randomly pivoted Cholesky (RPC), provably achieves near-optimal approximation guarantees. Moreover, in experiments, RPC matches or improves on the performance of alternative algorithms for low-rank psd approximation.
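To make the procedure concrete, here is a minimal NumPy sketch of the basic idea, with each pivot drawn with probability proportional to the residual diagonal. The function name and interface are ours for illustration; see arXiv:2207.06503 for the algorithm, its guarantees, and the authors' reference implementation.

```python
import numpy as np

def randomly_pivoted_cholesky(A, k, seed=0):
    """Rank-k psd approximation A ~ F @ F.T via randomly pivoted Cholesky.

    Illustrative dense sketch only; not the authors' reference code.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float)           # diagonal of the current residual
    pivots = []
    for i in range(k):
        # draw the next pivot with probability proportional to the residual diagonal
        p = rng.choice(n, p=d / d.sum())
        pivots.append(p)
        # Cholesky step: residual column at the pivot, normalized
        g = A[:, p] - F[:, :i] @ F[p, :i]
        F[:, i] = g / np.sqrt(g[p])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # update the residual diagonal
    return F, pivots

# tiny usage example: approximate a random rank-50 psd matrix with 20 pivots
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 50))
    A = X @ X.T
    F, piv = randomly_pivoted_cholesky(A, 20)
    print(np.linalg.norm(A - F @ F.T) / np.linalg.norm(A))
```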

Cholesky died in 1918 from wounds suffered in battle. In 1924, Cholesky’s colleague, Commandant Benoit, published his manuscript. One century later, a modern adaptation of Cholesky’s method still yields state-of-the-art performance for problems in scientific machine learning.

Joint work with Yifan Chen, Ethan Epperly, and Rob Webber. Available at arXiv:2207.06503.

Cyril Letrouit: Stability of optimal transport: old and new

Optimal transport consists of sending a given source probability measure to a given target probability measure in a way that is optimal with respect to some cost. On bounded subsets of R^d, if the cost is the squared Euclidean distance and the source measure is absolutely continuous, a unique optimal transport map exists.
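In symbols (standard background, not part of the abstract itself): with quadratic cost, the Monge formulation of the problem is to minimize the average squared displacement over all maps pushing the source measure forward to the target,

```latex
% Monge problem with quadratic cost on a bounded subset of R^d:
% minimize over all maps T pushing the source mu forward to the target nu.
\min_{T \colon T_{\#}\mu = \nu} \ \int \lVert x - T(x) \rVert^{2} \, \mathrm{d}\mu(x).
```

By Brenier's theorem, when the source measure is absolutely continuous this problem has a unique solution, which is the gradient of a convex function.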

The question we will discuss is the following: how does this optimal transport map change if we perturb the target measure? For instance, if instead of the target measure we only have access to samples of it, how much does the optimal transport map change? This question, motivated by the numerical aspects of optimal transport, has only recently started to receive partial answers, under quite restrictive assumptions on the source measure. We will review these answers and show how to handle much more general cases.
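As a rough numerical illustration of the question (a hypothetical setup, assuming the POT package, installable via `pip install pot`): fix a sampled source measure, replace the target by a fresh empirical sample of the same law, and compare the resulting maps, using the barycentric projection of the exact optimal plan as a proxy for the transport map.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

rng = np.random.default_rng(0)

# source: uniform weights on points drawn from the unit square
n, m = 300, 300
xs = rng.random((n, 2))
a = np.full(n, 1.0 / n)

# target nu and an empirical perturbation nu_hat (fresh samples of the same law)
yt = 0.5 + 0.25 * rng.standard_normal((m, 2))
yt_hat = 0.5 + 0.25 * rng.standard_normal((m, 2))
b = np.full(m, 1.0 / m)

def barycentric_map(xs, yt, a, b):
    # squared-Euclidean cost, exact OT plan, then barycentric projection
    M = ot.dist(xs, yt)            # defaults to squared Euclidean distances
    G = ot.emd(a, b, M)            # optimal coupling (linear program)
    return (G @ yt) / a[:, None]   # approximate transport map on the source points

T1 = barycentric_map(xs, yt, a, b)
T2 = barycentric_map(xs, yt_hat, a, b)

# size of the perturbation of the map, measured in L^2 of the source measure
print(np.sqrt(np.mean(np.sum((T1 - T2) ** 2, axis=1))))
```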

This is joint work with Quentin Mérigot.


Archive

Schedule Spring 2024

Schedule Fall 2023

Schedule Spring 2023

Schedule Fall 2022

Schedule Spring 2022

Schedule Fall 2021

Schedule Spring 2020

Schedule Fall 2019

Schedule Spring 2019

Schedule Fall 2018

Schedule Spring 2018

Schedule Fall 2017

Schedule Spring 2017