Dear Cambridge AI & Machine Learning enthusiast,
We are excited to host our second AI & Pizza event at 5:30 pm on 5th July 2023, where you can get your slice of the latest AI and machine learning research in Cambridge!
Join us at The Auditorium, 21 Station Rd, for an engaging evening featuring two 15-minute talks on cutting-edge research in AI and ML from both academia and industry (speaker details below). After the talks, we will provide free pizza and refreshments.
Stay tuned for more information, and we look forward to seeing you at the event!
P.S. We also welcome volunteer speakers from various backgrounds. Contact (firstname.lastname@example.org) if you are interested in giving a talk for our future events!
Location: The Auditorium, 21 Station Rd
Time: 17:30 – 18:00 (talks), 18:00 – 19:00 (pizza)
17:30 – 17:45: Austin Tripp, University of Cambridge
Title: Synthesizing molecules with Machine Learning
Abstract: Synthesizing novel molecules is a key task in chemistry and drug discovery, occupying a significant portion of medicinal chemists’ time. In this talk I present a summary of recent efforts to automate this process with machine learning, with a particular focus on reinforcement learning. I also highlight some issues with benchmarking practices in this area, which we have tried to address by releasing an open-source library called “syntheseus”.
17:45 – 18:00: Divyat Mahajan, MILA & Université de Montréal/Microsoft Research Cambridge
Title: Interventional Causal Representation Learning
Abstract: Causal representation learning seeks to extract high-level latent factors from low-level sensory data. Most existing methods rely on observational data and structural assumptions (e.g., conditional independence) to identify the latent factors. However, interventional data is prevalent across applications. Can interventional data facilitate causal representation learning? This talk explores that question. The key observation is that interventional data often carries geometric signatures of the latent factors’ support (i.e., the values each latent factor can possibly take). For example, when the latent factors are causally connected, interventions can break the dependency between the intervened latents’ support and that of their ancestors. Leveraging this fact, we prove that the latent causal factors can be identified up to permutation and scaling given data from perfect do-interventions. Moreover, if we have access to data from imperfect interventions, we can achieve block affine identification, meaning each estimated latent factor is entangled with only a few other latents. These results highlight the unique power of interventional data in causal representation learning: it enables provable identification of latent factors without any assumptions about their distributions or dependency structure.