Moving Obstacle Avoidance

Motion planning consists of finding a valid, collision-free path for a robot from a start configuration to a goal configuration. Planning in a dynamic environment is complicated by the need to constantly adjust plans to account for moving obstacles. Yet it is critical in real-world applications such as flight coordination, assistive robots, and autonomous vehicles. In these dynamic environments, the precise future positions of obstacles are often unobtainable due to robot sensor noise or the stochasticity of obstacle dynamics (such as pedestrians). Producing collision-free trajectories in real time in the presence of these uncertainties is therefore both important and an active area of research. We focus on developing real-time motion planners with high navigation success rates for environments with a large number (up to 900) of stochastically moving obstacles and robot sensor uncertainty. We have developed methods that work even when moving obstacles have strongly-interacting stochastic dynamics or completely unknown dynamics.

Strongly-interacting obstacles with known stochastic dynamics and imperfect robot sensors

We have used Monte Carlo simulations to predict the positions of stochastically moving obstacles and thereby better inform tree-based motion planning, a method we call Stochastic Ensemble Simulation (SES) based planning. The simulation can be run online [IROS 16, video below] to predict obstacles with strongly-interacting stochastic dynamics, or offline [IROS 15, video] for non-interacting obstacles. Both methods achieve higher success rates (up to 40% higher than comparison methods) in environments with 50 strongly-interacting obstacles and imperfect robot sensors.
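The core idea of ensemble simulation can be sketched as follows: forward-simulate a stochastic obstacle model many times and use the fraction of samples that land near a candidate planner node as a collision-probability estimate. This is a minimal illustration, not the SES implementation; the constant-velocity-plus-Gaussian-noise obstacle model, function names, and parameters are all hypothetical simplifications.

```python
import random
import math

def simulate_obstacle(pos, vel, dt, noise_std):
    """One Euler step of a hypothetical stochastic obstacle model:
    constant velocity plus Gaussian noise on each velocity component."""
    nx = pos[0] + (vel[0] + random.gauss(0, noise_std)) * dt
    ny = pos[1] + (vel[1] + random.gauss(0, noise_std)) * dt
    return (nx, ny)

def ensemble_collision_probability(start_pos, vel, t_horizon, dt,
                                   query_point, radius,
                                   n_samples=200, noise_std=0.5):
    """Monte Carlo estimate of the probability that the obstacle
    is within `radius` of `query_point` at time `t_horizon`."""
    hits = 0
    steps = int(round(t_horizon / dt))
    for _ in range(n_samples):
        pos = start_pos
        for _ in range(steps):
            pos = simulate_obstacle(pos, vel, dt, noise_std)
        if math.dist(pos, query_point) <= radius:
            hits += 1
    return hits / n_samples
```

A tree-based planner could then reject (or penalize) candidate tree nodes whose estimated collision probability exceeds a threshold at the time the robot would reach them.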

Unknown obstacle dynamics

When obstacle dynamics are unknown, the navigation problem can be even more difficult. We have tackled this problem with reinforcement learning, where the learned objective is to arrive at the goal while maximizing distance from obstacles. This can be seen as a preference balancing task, for which manually deriving optimal robot motions that trade off these opposing preferences is difficult. PrEference Appraisal Reinforcement Learning (PEARL) [ICRA 16, video below] automatically learns near-optimal motions that solve tasks with opposing preferences for acceleration-controlled robots. PEARL projects the high-dimensional continuous robot state space to a low-dimensional preference feature space, resulting in efficient and adaptable planning. PEARL can be applied to dynamic obstacle avoidance tasks, where an agent must navigate to the goal without colliding with moving obstacles. Although the agent is trained with only 4 static obstacles, the trained agent avoids up to 900 moving obstacles with complex hybrid stochastic dynamics. Our results indicate that PEARL has success rates comparable to state-of-the-art methods that can require manual tuning.
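The projection idea above can be sketched in a few lines: map the robot state to a small vector of preference features (attraction to the goal, repulsion from the nearest obstacle) and act greedily with respect to a learned linear value over those features. This is an illustrative simplification, not PEARL itself: the feature shapes are hypothetical, the weights are hand-set rather than learned, and velocity actions stand in for the acceleration-controlled robots used in the paper.

```python
import math

def preference_features(state, goal, obstacles):
    """Project a (possibly high-dimensional) robot state onto a small
    preference-feature vector: goal attraction and obstacle repulsion.
    The exponential shapes are illustrative, not PEARL's features."""
    dist_goal = math.dist(state, goal)
    dist_obs = min(math.dist(state, o) for o in obstacles)
    # Closer to goal -> larger first feature; closer to an
    # obstacle -> more negative second feature.
    return (math.exp(-dist_goal), -math.exp(-dist_obs))

def greedy_action(state, goal, obstacles, weights, actions, dt=0.1):
    """Pick the action maximizing the linear value w . features(s');
    in a learned policy, `weights` would come from training."""
    def value(s):
        f = preference_features(s, goal, obstacles)
        return sum(w * fi for w, fi in zip(weights, f))
    return max(actions,
               key=lambda a: value((state[0] + a[0] * dt,
                                    state[1] + a[1] * dt)))
```

With an obstacle far away, the goal-attraction feature dominates and the greedy policy steps toward the goal; as an obstacle approaches, the repulsion feature pulls the chosen action away from it.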

Publications & Papers

  • Yazied Hasan, Arpit Garg, Satomi Sugaya, Mohammad R. Yousefi, Aleksandra Faust, Lydia Tapia, "Defensive Escort Teams for Navigation in Large Crowds via Multi-Agent Deep Reinforcement Learning", In IEEE Robotics and Automation Letters, pp.5645-5652, 2020.(pdf, video)

  • Arpit Garg, Hao-Tien Chiang, Satomi Sugaya, Lydia Tapia, "Comparison of Deep Reinforcement Learning Policies to Formal Methods for Moving Obstacle Avoidance", In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 3534-3541, Macau, China, Nov. 2019.(pdf)

  • Avanika Mahajan, Lama A. Youssef, Cédric Cleyrat, Rachel Grattan, Shayna R. Lucero, Christopher P. Mattison, M. Frank Erasmus, Bruna Jacobson, Lydia Tapia, William S. Hlavacek and Mark Schuyler, "Allergen Valency, Dose, and FcεRI Occupancy Set Thresholds for Secretory Responses to Pen a 1 and Motivate Design of Hypoallergens", In The Journal of Immunology, 198(3) pp.1034-1046, Feb. 2017. (pdf)

  • Hao-Tien Chiang, Nathanael Rackley, Lydia Tapia, "Runtime SES Planning: Online Motion Planning in Environments with Stochastic Dynamics and Uncertainty", In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 4802-4809, Daejeon, South Korea, Oct. 2016. (pdf, Bibtex)

  • Aleksandra Faust, Hao-Tien Chiang, Nathanael Rackley, Lydia Tapia, "Avoiding Moving Obstacles with Stochastic Hybrid Dynamics using PEARL: PrEference Appraisal Reinforcement Learning", In Proc. of IEEE International Conference on Robotics and Automation (ICRA), pp. 484-490, Stockholm, Sweden, May 2016. (pdf, Bibtex)

  • Hao-Tien Chiang, Nathanael Rackley, Lydia Tapia, "Stochastic Ensemble Simulation Motion Planning in Stochastic Dynamic Environments", In International Conference on Intelligent Robots and Systems (IROS), pp. 2347-2354, Hamburg, Germany, Oct. 2015. (pdf, Bibtex)

  • Hao-Tien Chiang, Nick Malone, Kendra Lesser, Meeko Oishi, Lydia Tapia, "Path-Guided Artificial Potential Fields with Stochastic Reachable Sets for Motion Planning in Highly Dynamic Environments", In International Conference on Robotics and Automation (ICRA), pp. 2347-2354, Seattle, WA, U.S.A., May 2015. (pdf, Bibtex)

  • Hao-Tien Chiang, Nick Malone, Kendra Lesser, Meeko Oishi, Lydia Tapia, "Aggressive Moving Obstacle Avoidance Using a Stochastic Reachable Set Based Potential Field", In International Workshop on the Algorithmic Foundations of Robotics (WAFR), Istanbul, Turkey, 3-5 Aug. 2014. (pdf, Bibtex)

  • Nick Malone, Kendra Lesser, Meeko Oishi, Lydia Tapia, "Stochastic Reachability Based Motion Planning for Multiple Moving Obstacle Avoidance", In Proc. International Conference on Hybrid Systems: Computation and Control (HSCC), pp. 51-60, Berlin, Germany, Apr. 2014. (pdf, Bibtex, Video)