Cambridge AI & Machine Learning enthusiasts,

We are resuming our Cambridge AI & Pizza talk series this term. The next event takes place at 5:30 pm on Thursday 19 October 2023, where you can get your slice of the latest AI and machine learning research in Cambridge!

Join us at the Auditorium, 21 Station Rd, for an engaging evening featuring two 15-minute talks on cutting-edge AI and ML research from both academia and industry (speaker details below). Free pizza and refreshments will be provided after the talks.

Stay tuned for more information, and we look forward to seeing you at the event!

Cheers,

Chao


Location: The Auditorium, 21 Station Rd 

Time: 17:30 – 18:00 (talks), 18:00 – 19:00 (pizza).

Speakers

17:30-17:45: Shreyas Padhy, University of Cambridge

Title: Stochastic Gradient Descent, a Scalable Algorithm for Gaussian Processes

Abstract: Gaussian processes are among the most powerful techniques for principled uncertainty estimation, but they have been hard to scale because exact posterior inference costs cubic time in the number of datapoints. In this talk, drawing on ideas from pathwise conditioning, we propose stochastic gradient algorithms that scale posterior inference in GPs to millions of datapoints (including UCI datasets with 2.5 million datapoints). Through a spectral characterization of the implicit bias of stochastic gradient descent, we show that it produces predictive distributions close to the true posterior, both in regions with sufficient data coverage and in regions sufficiently far away from the data. Using our proposed approach, we obtain high-quality uncertainty estimates that enable strong downstream applications such as Thompson sampling.

References: https://arxiv.org/abs/2306.11589 (Lin, Jihao Andreas, et al. “Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent.” arXiv preprint arXiv:2306.11589 (2023).)
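
For readers who want a feel for the idea before the talk, here is a rough NumPy sketch of the pathwise-conditioning trick the abstract describes: a posterior sample is a prior sample plus a data-dependent correction, and the correction's linear solve is replaced by stochastic gradient descent on an equivalent quadratic objective. The toy 1-D dataset, kernel choice, and all variable names below are ours, not the authors'; this is an illustration, not their implementation.

import numpy as np

# Sketch only (our toy setup, not the paper's code). Pathwise conditioning
# writes a posterior sample as f_post(.) = f_prior(.) + k(., X) @ v, where
# v solves (K + noise^2 I) v = y - f_prior(X) - eps. Instead of a cubic-cost
# solve, we run SGD on the equivalent quadratic 0.5 v'Av - v'b, estimating
# the kernel matrix-vector product from minibatches.

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, noise = 500, 0.1
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(2 * X) + noise * rng.normal(size=n)

K = rbf(X, X)
f_prior = np.linalg.cholesky(K + 1e-6 * np.eye(n)) @ rng.normal(size=n)
eps = noise * rng.normal(size=n)
b = y - f_prior - eps                           # right-hand side of the solve

v = np.zeros(n)
lr, batch = 0.3 / n, 64
for step in range(20000):
    idx = rng.choice(n, batch, replace=False)
    Kv_hat = K[:, idx] @ v[idx] * (n / batch)   # unbiased estimate of K @ v
    v -= lr * (Kv_hat + noise**2 * v - b)       # gradient of the quadratic

# Correction term of the posterior sample at test points. (A full pathwise
# sample would also add a prior draw at the test points, e.g. via random
# features; omitted here for brevity.)
Xs = np.linspace(-3, 3, 200)
f_corr = rbf(Xs, X) @ v

# SGD barely moves along the kernel's tiny eigendirections, yet predictions
# hardly suffer; that implicit bias is what the talk characterizes spectrally.
v_exact = np.linalg.solve(K + noise**2 * np.eye(n), b)
print("max predictive gap:", np.abs(f_corr - rbf(Xs, X) @ v_exact).max())

At this toy scale the exact solve is available for comparison; the point of the approach is that the SGD path stays cheap when the dataset size makes the direct solve infeasible.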


17:45-18:00: Wenbo Gong, Microsoft Research Cambridge

Title: BayesDAG: Gradient-Based Posterior Sampling for Causal Discovery

Abstract: The rapidly evolving field of causal machine learning is transforming the way we make data-driven decisions across a wide range of domains, including business engagement, medical treatment, and policy-making. A critical aspect of this process is the ability to infer the posterior distribution of causal models from observational data, which helps quantify epistemic uncertainty and benefits downstream tasks. However, joint inference over the space of directed acyclic graphs (DAGs) and nonlinear function parameters presents severe computational challenges. In this talk, we will discuss our approach, BayesDAG, which performs scalable posterior inference over DAGs using stochastic gradient Markov chain Monte Carlo (SG-MCMC). This enables efficient and scalable Bayesian causal discovery by directly drawing posterior samples from the DAG and parameter spaces.

References: https://arxiv.org/abs/2307.13917 (Annadani, Yashas, et al. “BayesDAG: Gradient-Based Posterior Sampling for Causal Discovery.” arXiv preprint arXiv:2307.13917 (2023).)
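
For those less familiar with SG-MCMC, the sampler family the abstract builds on, here is a minimal sketch of stochastic gradient Langevin dynamics (SGLD) on a toy Bayesian linear regression. It illustrates only the generic machinery of drawing posterior samples from noisy minibatch gradients; the BayesDAG parameterisation of DAG space is not reproduced here, and the model and names are ours.

import numpy as np

# SGLD sketch (our toy model, not BayesDAG itself): each step follows a
# minibatch estimate of the log-posterior gradient plus injected Gaussian
# noise, so the iterates form (approximate) posterior samples.
rng = np.random.default_rng(0)
N, d, sigma = 1000, 3, 0.5
theta_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
y = X @ theta_true + sigma * rng.normal(size=N)

def grad_log_post(theta, idx):
    """Unbiased stochastic gradient of log p(theta | data)."""
    resid = y[idx] - X[idx] @ theta
    grad_lik = (N / len(idx)) * (X[idx].T @ resid) / sigma**2  # rescaled minibatch
    return grad_lik - theta                                    # N(0, I) prior term

theta, samples = np.zeros(d), []
lr, batch = 1e-4, 64
for step in range(20000):
    idx = rng.choice(N, batch, replace=False)
    theta = theta + 0.5 * lr * grad_log_post(theta, idx) \
                  + np.sqrt(lr) * rng.normal(size=d)
    if step >= 10000 and step % 10 == 0:   # keep post burn-in, thinned samples
        samples.append(theta.copy())

post = np.stack(samples)
print("posterior mean:", post.mean(axis=0))
print("true theta:    ", theta_true)

The update above is the continuous-parameter backbone; BayesDAG's contribution is making this kind of sampler work over the discrete, acyclicity-constrained space of DAGs jointly with the function parameters.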