Learning, Perception, and Abstraction for Long-Horizon Planning

CoRL 2022 Workshop, Dec 15

Location: 260-115 F&PAA Auditorium (+ Virtual via Pheedloop)

Time: 08:30 - 18:20, New Zealand Daylight Time [UTC+13]


Even as robots have become more capable at solving manipulation and planning tasks, most solution strategies are limited to problems that are quite short or narrow in scope by human standards. Abstraction—the ability for an agent to reason at a higher level about salient aspects of a task—has long been a tool for roboticists to mitigate the challenges of imperfect perception and long-horizon reasoning associated with planning. Yet useful abstractions are difficult to come by in a world rife with complexity; for many real-world problems that require planning far into the future, it is often unclear how to abstract knowledge and perception without degrading performance or discarding information the agent may need to effectively complete its task.

Rather than hand-designing representations for state, action, and perceptual abstraction, there has been significant recent interest in learning some or all of these representations—e.g., via reinforcement learning, direct supervision, or informed via natural language—in an effort to mitigate the computational challenges of planning without compromising performance. Despite the enormous potential of this direction, robot planning has yet to have its ImageNet moment. In particular, it is as yet unclear what role learning will play in such a future: how much should be learned, and how much hand-designed?

This workshop is aimed at bringing together researchers at the intersection of planning and learning, both for perception and action, in an effort to systematize the discussion of long-horizon planning and the abstractions that enable it. In particular, participants will seek to better understand the challenges that limit progress in this space and the relative potential of different strategies to overcome these challenges. Moreover, we adopt an inclusive (rather than exclusive) definition of what constitutes “long-horizon” in an effort to welcome all researchers who seek to develop the capabilities necessary to “scale up” planning in their application domain. We welcome submissions including, but not limited to, the following topics:

Some Motivating Questions



Invited Speakers

Angel Chang (Simon Fraser University): perception, language, embodied agent navigation
Jitendra Malik (UC Berkeley): perception, embodied agents
Roozbeh Mottaghi (FAIR & University of Washington): perception, embodied agents, rearrangement
Kiana Ehsani (Allen Institute for Artificial Intelligence): embodied agents, mobile manipulation
George Konidaris (Brown University): RL, hierarchies, skills to symbols
Subbarao Kambhampati (Arizona State University): planning, information extraction
Dylan Hadfield-Menell (Massachusetts Institute of Technology): learning for task and motion planning
Lerrel Pinto (New York University): RL, large-scale robot learning

Accepted Papers

  1. "Guided Skill Learning and Abstraction for Long-Horizon Manipulation" by Shuo Cheng, Danfei Xu.
  2. "Neural Contact Fields: Tracking Extrinsic Contact with Tactile Sensing" by Carolina Higuera, Siyuan Dong, Byron Boots, Mustafa Mukadam.
  3. "Planning Goals for Exploration" by Edward S. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman.
  4. "ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation" by Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine.
  5. "Generalizable Task Planning through Representation Pretraining" by Chen Wang, Danfei Xu, Li Fei-Fei.
  6. "Policy-Guided Lazy Search with Feedback for Task and Motion Planning" by Mohamed Khodeir, Atharv Y Sonwane, Florian Shkurti.
  7. "Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation" by Xingyu Lin, Carl Qi, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held.
  8. "PALMER: Perception-Action Loop with Memory for Long-Horizon Planning" by Onur Beker, Mohammad Mohammadi, Amir Zamir.
  9. "Towards Open-World Interactive Disambiguation for Robotic Grasping" by Yuchen Mo, Hanbo Zhang, Tao Kong.
  10. "PlanT: Explainable Planning Transformers via Object-Level Representations" by Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger.
  11. "[Blue Sky] Designing for Long Horizon Social Interactions" by Ifrah Idrees.
  12. "Learning Generalizable Task Policies for Embodied Agents in Domains with Rich Inter-object Interactions" by Mustafa Chasmai, Shreshth Tuli, Mausam, Rohan Paul.
  13. "On the Role of Structure in Manipulation Skill Learning" by Eric Rosen, Ben M Abbatematteo, Skye Thompson, Tuluhan Akbulut, George Konidaris.
  14. "[Blue Sky] Are robots stuck in time? Learning to represent actions in long horizon planning by learning to remember the past" by Chad DeChant.
  15. "Skill-based Model-based Reinforcement Learning" by Lucy Xiaoyang Shi, Joseph J Lim, Youngwoon Lee.
  16. "Learning to Guide Search in Long-Horizon Task and Motion Planning" by Christopher Powell Bradley, Nicholas Roy.
  17. "Sequence-Based Plan Feasibility Prediction for Efficient Task and Motion Planning" by Zhutian Yang, Caelan Reed Garrett, Dieter Fox.
  18. "[Blue Sky] Object-Level Planning and Abstraction" by David Paulius.
  19. "Comparison of Model Free and Model-Based Learning-Informed Planning for PointGoal Navigation" by Yimeng Li, Arnab Debnath, Gregory J. Stein, Jana Košecká.

Best Paper Awards

Best Paper: "Policy-Guided Lazy Search with Feedback for Task and Motion Planning" by Mohamed Khodeir, Atharv Y Sonwane, Florian Shkurti.

Best Paper Finalists:

Congratulations to all the authors!


Organizers

Gregory J. Stein (George Mason University)
Yezhou ‘YZ’ Yang (Arizona State University)
Tom Silver (Massachusetts Institute of Technology)
Jana Košecká (George Mason University)