Dear Cambridge AI & Machine Learning Enthusiast,

After a series of successful and engaging AI & Pizza events last year, we’re back to bring you more insightful talks, discussions, and delicious pizza this year! We’re happy to restart our AI & Pizza talk series at 5:30 pm on Thursday, March 14th, 2024. Save the date for the latest advancements in AI and machine learning, right here in Cambridge.

P.S. We are always looking for speakers from related fields. Please do reach out to us (; ) if you are interested!

Location: The Auditorium, 21 Station Rd

Date: Thursday, March 14th, 2024


5:30 pm – 6:00 pm: Talks

6:00 pm – 7:00 pm: Networking, Pizza, and Refreshments

Speaker: Tycho van der Ouderaa (Imperial College London/University of Oxford)

Title: The LLM Surgeon

5:30 pm – 5:45 pm

Abstract: State-of-the-art language models are becoming increasingly large in an effort to achieve the highest performance on large corpora of available textual data. However, the sheer size of the Transformer architectures makes it difficult to deploy models within computational, environmental, or device-specific constraints. We explore data-driven compression of existing pretrained models as an alternative to training smaller models from scratch. To do so, we scale Kronecker-factored curvature approximations of the target loss landscape to large language models. This allows us to compute both the dynamic allocation of structures that can be removed and the updates of remaining weights that account for the removal. We provide a general framework for unstructured, semi-structured and structured pruning and improve upon weight updates to capture more correlations between weights, while remaining computationally efficient. Experimentally, our method can prune rows and columns from a range of OPT models and Llamav2-7B by 20%-30%, with a negligible loss in performance, and achieve state-of-the-art results in unstructured and semi-structured pruning of large language models.
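For readers unfamiliar with curvature-based pruning, the core second-order idea the abstract builds on is the classic Optimal Brain Surgeon step: remove the weight with the smallest saliency and compensate by updating the remaining weights using the inverse Hessian. The NumPy sketch below illustrates that single step on a generic quadratic loss; it is a simplified illustration of the underlying principle, not the speaker's LLM Surgeon implementation (which scales this idea with Kronecker-factored approximations).

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One Optimal Brain Surgeon step (illustrative sketch).

    Removes the single weight q with the smallest saliency
    w_q^2 / (2 [H^-1]_qq) and applies the compensating update
    dw = -(w_q / [H^-1]_qq) * H^-1 e_q to the remaining weights.
    """
    diag = np.diag(H_inv)
    saliency = w ** 2 / (2.0 * diag)
    q = int(np.argmin(saliency))
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new[q] = 0.0  # the pruned weight is exactly zero
    return w_new, q

# Toy example: a quadratic loss L(v) = 0.5 (v - w)^T H (v - w),
# minimized at the current weights w.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
H = A.T @ A + 0.1 * np.eye(4)   # positive-definite curvature
H_inv = np.linalg.inv(H)
w = rng.normal(size=4)
w_new, q = obs_prune_one(w, H_inv)
```

For a quadratic loss, the resulting increase in loss equals exactly the saliency of the pruned weight, which is what makes saliency a principled ranking criterion.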

Speaker: Meyer Scetbon (Microsoft Research)

Title: Low-Rank Structures in Optimal Transport

5:45 pm – 6:00 pm

Abstract: Optimal transport (OT) plays an increasingly important role in machine learning (ML) to compare probability distributions. Yet, it poses, in its original form, several challenges when used for applied problems: (i) computing OT between discrete distributions amounts to solving a large and expensive network flow problem which requires a supercubic complexity in the number of points; (ii) estimating OT using sampled measures is doomed by the curse of dimensionality. These issues can be mitigated using an entropic regularization, solved with the Sinkhorn algorithm, which improves on both statistical and computational aspects. While much faster, entropic OT still requires a quadratic complexity with respect to the number of points and therefore remains prohibitive for large-scale problems. In this talk, I will present new regularization approaches for the OT problem, as well as its quadratic extension, the Gromov-Wasserstein (GW) problem, which impose low-rank structures on the admissible couplings. This results in the development of new algorithms that enjoy a linear complexity both in time and memory with respect to the number of points, enabling their application in the large-scale setting where millions of points need to be compared. Additionally, I will show that these new regularization schemes have better statistical performance than the entropic approach, that they naturally interpolate between the Maximum Mean Discrepancy (MMD) and OT, and that they offer general clustering methods for arbitrary geometries.
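As background for the talk, the entropic baseline the abstract contrasts against is the standard Sinkhorn algorithm: alternately rescale the Gibbs kernel K = exp(-C/eps) until the coupling's marginals match the two input distributions. The minimal NumPy sketch below shows that baseline and its quadratic-in-n cost (the full n-by-m kernel is materialized); it is generic background, not the speaker's low-rank method, which avoids exactly this quadratic bottleneck.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=2000):
    """Entropic OT between discrete distributions a and b with cost
    matrix C, via Sinkhorn iterations (illustrative sketch).

    Note K has shape (n, m): storing and multiplying by it is the
    quadratic cost that low-rank approaches aim to remove.
    """
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)           # match column marginals
        u = a / (K @ v)             # match row marginals
    P = u[:, None] * K * v[None, :]  # regularized coupling
    return P

# Toy example: uniform distributions over random points in the plane.
rng = np.random.default_rng(1)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(4, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared distances
a, b = np.full(5, 1 / 5), np.full(4, 1 / 4)
P = sinkhorn(a, b, C)
```

At convergence, the coupling's row and column sums recover a and b, which is the defining feasibility condition of the transport problem.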