Even as robots have become more capable at manipulation and planning tasks, most solution strategies remain limited to problems that are quite short or narrow in scope by human standards. Abstraction—an agent's ability to reason at a higher level about the salient aspects of a task—has long been a tool roboticists use to mitigate the challenges of imperfect perception and long-horizon reasoning. Yet useful abstractions are difficult to come by in a world rife with complexity: for many real-world problems that require planning far into the future, it is often unclear how to abstract knowledge and perception without degrading performance or discarding information the agent may need to complete its task effectively.
Rather than hand-designing state, action, and perceptual abstractions, there has been significant recent interest in learning some or all of these representations—e.g., via reinforcement learning, direct supervision, or natural language—in an effort to mitigate the computational challenges of planning without compromising performance. Despite the enormous potential of this direction, robot planning has yet to have its ImageNet moment. In particular, the role that learning will play in such a future remains unclear: how much should be learned, and how much hand-designed?
This workshop aims to bring together researchers at the intersection of planning and learning, for both perception and action, in an effort to systematize the discussion of long-horizon planning and the abstractions that enable it. In particular, participants will seek to better understand the challenges that limit progress in this space and the relative potential of different strategies to overcome them. Moreover, we adopt an inclusive (rather than exclusive) definition of what constitutes "long-horizon" so as to welcome all researchers who seek to develop the capabilities necessary to "scale up" planning in their application domain. We welcome submissions including, but not limited to, the following topics:
- state and action abstraction learning for sequential decision-making
- hierarchical reinforcement learning
- embodied navigation (including vision and language-based navigation)
- planning in both fully and partially observed environments
- rearrangement and affordance learning
Some Motivating Questions
- How much should be learned or hand-designed?
- Should agents be provided a symbolic representation of the world in advance?
- If learning is to be used, how can we tractably train an agent when the quality of its actions may not be apparent for hundreds or thousands of steps? Where will the data come from, and in what form?
- How can we standardize the metrics and benchmarks we use to evaluate performance in long-horizon planning domains?
All times local, New Zealand Daylight Time [UTC+13]
- 08:30–08:40 (10min) Introduction and welcome from the organizers
- 08:40–10:00 (1h20min) Speakers and Panel 1 (Emphasis: Perception)
- 08:40–08:52: Angel Chang (Simon Fraser University)
- 08:52–09:04: Lerrel Pinto (New York University)
- 09:04–09:16: Chad DeChant (Columbia University) [Blue Sky]
- 09:16–09:28: Subbarao Kambhampati (Arizona State University)
- 09:28–10:00: Speaker Panel (Moderator: Gregory J. Stein)
- 10:10–11:20 (1h10min) Speakers and Panel 2 (Emphasis: Abstraction)
- 10:10–10:22: George Konidaris (Brown University)
- 10:22–10:34: Ifrah Idrees (Brown University) [Blue Sky]
- 10:34–10:46: Jitendra Malik (UC Berkeley)
- 10:46–11:20: Speaker Panel (Moderator: Rohan Chitnis)
- 11:30–11:50 (20min) Spotlight Talks 1
- 11:30–11:35: "Generalizable Task Planning through Representation Pretraining" (Speaker: Chen Wang) [In Person]
- 11:35–11:40: "Policy-Guided Lazy Search with Feedback for Task and Motion Planning" (Speaker: Mohamed Khodeir) [Virtual]
- 11:40–11:45: "Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation" (Speaker: Carl Qi) [In Person]
- 11:45–11:50: "Comparison of Model Free and Model-Based Learning-Informed Planning for PointGoal Navigation" (Speaker: Yimeng Li) [Virtual]
- 11:50–12:30 (40+min) Poster Session 1
- 12:30–15:30 (3h) Break for Conference Events
- 15:30–16:50 (1h20min) Speakers and Panel 3 (Emphasis: Planning)
- 15:30–15:42: Roozbeh Mottaghi (FAIR & University of Washington)
- 15:42–15:54: David Paulius (Brown University) [Blue Sky]
- 15:54–16:06: Kiana Ehsani (Allen Institute for Artificial Intelligence)
- 16:06–16:18: Dylan Hadfield-Menell (Massachusetts Institute of Technology)
- 16:18–16:50: Speaker Panel (Moderator: Tom Silver)
- 16:50–17:10 (20min) Spotlight Talks 2
- 16:50–16:55: "Planning Goals for Exploration" (Speaker: Edward Hu) [In Person]
- 16:55–17:00: "Guided Skill Learning and Abstraction for Long-Horizon Manipulation" (Speaker: Shuo Cheng) [Virtual]
- 17:00–17:05: "Learning Generalizable Task Policies for Embodied Agents in Domains with Rich Inter-object Interactions" (Speaker: Shreshth Tuli) [In Person]
- 17:05–17:10: "Sequence-Based Plan Feasibility Prediction for Efficient Task and Motion Planning" (Speaker: Zhutian Yang) [Virtual]
- 17:10 (1min) Best Paper Award
- 17:10–18:20 (1h10min) Poster Session 2 + Social
- 18:20–onwards Wrap Up + Social
Best Paper: "Policy-Guided Lazy Search with Feedback for Task and Motion Planning" by Mohamed Khodeir, Atharv Y Sonwane, Florian Shkurti.
Best Paper Finalists:
Congratulations to all the authors!