Autonomous exploration and data-efficient learning are important ingredients for helping machine learning handle the complexity and variety of real-world interactions. In this talk, I will describe methods that provide these ingredients and serve as building blocks for enabling self-sufficient robot learning.

First, I will outline a family of methods that facilitate active global exploration. Specifically, they enable ultra-data-efficient Bayesian optimization in the real world by leveraging experience from simulation to shape the space of decisions. In robotics, these methods enable success with a budget of only 10-20 real-robot trials for a range of tasks: bipedal and hexapod walking, task-oriented grasping, and nonprehensile manipulation.
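To make the flavor of this concrete, here is a minimal sketch of Bayesian optimization whose Gaussian-process surrogate operates on a simulation-informed feature map rather than on raw controller parameters. The feature map phi, the reward function real_robot_trial, and all numbers are illustrative stand-ins, not the methods from the talk.

```python
# Minimal sketch (illustrative, not the talk's exact method): Bayesian optimization
# where the GP kernel acts on features phi(x) shaped by simulation experience, so
# that only a handful of real-robot trials are needed. All names are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def phi(x):
    """Stand-in for a simulation-informed feature map (e.g., predicted
    behavior descriptors of controller parameters x)."""
    return np.column_stack([np.sin(3 * x[:, 0]), x.sum(axis=1)])

def real_robot_trial(x):
    """Placeholder for an expensive real-world rollout returning a reward."""
    return float(-np.sum((x - 0.3) ** 2) + 0.05 * np.random.randn())

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(500, 2))      # candidate controller parameters
X = list(rng.uniform(0, 1, size=(3, 2)))           # a few seed trials
y = [real_robot_trial(x) for x in X]

for trial in range(15):                            # stay within ~10-20 real trials
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(phi(np.array(X)), y)
    mu, sigma = gp.predict(phi(candidates), return_std=True)
    x_next = candidates[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
    X.append(x_next)
    y.append(real_robot_trial(x_next))

print("best reward after", len(y), "trials:", max(y))
```

The key design choice illustrated here is that the expensive real trials only refine a surrogate whose geometry is already shaped by simulation, which is what keeps the real-world budget so small.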

Next, I will describe how to bring simulations closer to reality. This is especially important for scenarios with highly deformable objects, where simulation parameters influence the dynamics in unintuitive ways. Success here hinges on either finding effective representations for the state of deformables or leveraging differentiable simulation and rendering to optimize simulation parameters directly.
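As a toy illustration of the second route, the sketch below treats a tiny hand-written mass-spring simulator as a differentiable function of its stiffness parameter and fits that parameter by gradient descent so that simulated motion matches an observed trajectory. The simulator, loss, and optimizer settings are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not the talk's actual pipeline): a differentiable
# toy simulator whose stiffness k is recovered by direct gradient-based optimization
# against an "observed" trajectory.
import torch

def simulate(k, steps=50, dt=0.05):
    """Differentiable toy simulator: one damped spring, returns positions."""
    x, v = torch.tensor(1.0), torch.tensor(0.0)
    traj = []
    for _ in range(steps):
        a = -k * x - 0.1 * v          # spring force plus damping
        v = v + dt * a
        x = x + dt * v
        traj.append(x)
    return torch.stack(traj)

# "Real" observation generated with an unknown stiffness we want to recover.
with torch.no_grad():
    observed = simulate(torch.tensor(4.0))

k = torch.tensor(1.0, requires_grad=True)          # initial guess
opt = torch.optim.Adam([k], lr=0.05)
for step in range(300):
    loss = torch.mean((simulate(k) - observed) ** 2)
    opt.zero_grad()
    loss.backward()                                 # gradients flow through the simulator
    opt.step()

print("recovered stiffness:", k.item())            # should approach 4.0
```

The same principle scales up when the simulator (and, with differentiable rendering, the camera observation model) exposes gradients with respect to physical parameters of deformable objects.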

Finally, I will share a vision for combining efficient representations and policy structures to obtain adaptable mobile manipulation that succeeds not only for rigid objects, but also for articulated and deformable ones. For this, our recent work on generalizing equivariant representations can offer instant generalization to changes in object pose and scale. To create a compelling demonstration of these algorithmic advances, I will share ideas for how to employ them in everyday household tasks, leveraging a prototype of our TidyBot system and integrating with large vision-language models.
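As a hint of what equivariance buys, the toy predictor below is built only from geometric operations that commute with translation, rotation, and uniform scaling, so a transformed object yields a correspondingly transformed prediction with no retraining. The predictor and point cloud are hypothetical and unrelated to the actual architecture discussed in the talk.

```python
# Minimal sketch (illustrative assumption): a grasp-point predictor that is
# equivariant to similarity transforms by construction, checked numerically.
import numpy as np

def predict_grasp(points):
    """Toy equivariant predictor: offset from the centroid toward the farthest point."""
    centroid = points.mean(axis=0)
    farthest = points[np.argmax(np.linalg.norm(points - centroid, axis=1))]
    return centroid + 0.5 * (farthest - centroid)

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))                   # stand-in object point cloud

# Apply a random similarity transform (rotation R, translation t, scale s).
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))        # random orthonormal matrix
t, s = rng.normal(size=3), 2.5
transformed = s * cloud @ R.T + t

# Equivariance check: transforming the input transforms the prediction the same way.
g1 = predict_grasp(transformed)
g2 = s * predict_grasp(cloud) @ R.T + t
print(np.allclose(g1, g2))                          # True
```

In a learned system the same property means that a policy trained on one object pose and scale transfers immediately to new poses and scales, which is the behavior the talk's mobile-manipulation results build on.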