PhD defense 1/2: Planning, Execution, Representation, and Their Integration for Multiple Moving Agents

https://kei18.github.io/phd-dissertation
planning & representation parts

Transcript

  1. 20th Jan. 2023 PhD Defense Planning, Execution, Representation, and Their

    Integration for Multiple Moving Agents Keisuke Okumura Tokyo Institute of Technology, Japan 東京工業大学 Tokyo Institute of Technology http://bit.ly/3krriSO
  2. /99 2 YouTube/Mind Blowing Videos logistics YouTube/WIRED manufacturing YouTube/Tokyo 2020

    entertainment Swarms { are, will be } necessary everywhere
  3. /99 3 objective-1 Representation objective-2 Planning Common Knowledge? Cooperation? (increased)

    Uncertainty Execution Navigation for a Team of Agents Who Plans? Huge Search Space
  4. /99 4 integration Representation [AAMAS-22+] building effective roadmaps Execution [AAAI-21,

    ICRA-21, ICAPS-22*, IJCAI-22, AAAI-23+] overcoming uncertainties *Best Student Paper Award Planning ≥1000 agents within 1 sec [IJCAI-19, IROS-21, ICAPS-22*, AIJ-22, AAAI-23+] Research Summary
  5. /99 6 Short-Horizon Planning for MAPF Short-Horizon Planning for Unlabeled-MAPF

    Short-Horizon Planning Guides Long-Horizon Planning Improving Solution Quality by Iterative Refinement 4. 5. 6. 7. Part I. Planning Building Representation from Learning Building Representation while Planning 11. 12. Part III. Representation Online Planning to Overcome Timing Uncertainties Offline Planning to Overcome Timing Uncertainties Offline Planning to Overcome Crash Faults 8. 9. 10. Part II. Execution Introduction, Preliminaries, and Background 1-3. Conclusion and Discussion 13. Dissertation Outline
  6. /99 7 Short-Horizon Planning for MAPF Short-Horizon Planning for Unlabeled-MAPF

    Short-Horizon Planning Guides Long-Horizon Planning Improving Solution Quality by Iterative Refinement 4. 5. 6. 7. Part I. Planning Dissertation Outline
  7. /99 8 Background: Multi-Agent Path Finding (MAPF) given agents (starts)

    graph goals solution paths without collisions optimization is intractable in various criteria [Yu+ AAAI-13, Ma+ AAAI-16, Banfi+ RA-L-17, Geft+ AAMAS-22]
  8. /99 9 Challenge in Planning quality & solvability high low

    effort small large speed & scalability complete optimal incomplete suboptimal keeping solvability & solution quality with small planning effort holy grail
  9. /99 10 Approaches in Planning quality & solvability effort high

    low small large speed & scalability approach-1 [Chap. 4-6] short-horizon planning guides long-horizon planning approach-2 [Chap. 7] iterative refinement planning time (sec) cost / lower bound 300 agents
  10. /99 11 Online Planning to Overcome Timing Uncertainties Offline Planning

    to Overcome Timing Uncertainties Offline Planning to Overcome Crash Faults 8. 9. 10. Part II. Execution Dissertation Outline
  11. /99 12 Challenge in Execution failure demo of synchronous execution

    planning execution reality gaps overcoming uncertainties at runtime
  12. /99 13 Approach in Execution new solution concepts for multi-agent

    path planning timing-robust conventional 2 1 4 3 0 0 1 2 vulnerable to delays formalize, analyze, and solve problems against: timing uncertainties [Chap. 8-9] / crash faults [Chap. 10]
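The contrast between delay-vulnerable synchronized execution and timing-robust execution can be illustrated with a generic order-preserving scheme (a hedged sketch in the spirit of causality-preserving execution; this is NOT necessarily the dissertation's exact solution concept, and the names here are mine): agents advance along their planned paths asynchronously, entering each vertex only in the plan's visiting order and only once it is vacant.

```python
import random

# Sketch: delay-tolerant execution of a timed MAPF plan by preserving the
# planned passing order at every vertex. Plans needing synchronous cyclic
# rotations would deadlock under this simple scheme (a known limitation of
# order-preserving execution, handled more carefully in the literature).

def execute_with_delays(paths, delay_prob=0.3, seed=0, max_steps=10_000):
    rng = random.Random(seed)
    # planned visiting order at every vertex, from the timed plan
    order = {}
    for i, path in enumerate(paths):
        for t, v in enumerate(path):
            if t == 0 or v != path[t - 1]:     # ignore waiting steps
                order.setdefault(v, []).append((t, i))
    order = {v: [i for _, i in sorted(ent)] for v, ent in order.items()}
    # asynchronous execution only needs the wait-free vertex sequences
    paths = [[v for t, v in enumerate(p) if t == 0 or v != p[t - 1]]
             for p in paths]
    passed = {v: 0 for v in order}
    for path in paths:                         # starts are occupied already
        passed[path[0]] += 1
    idx = [0] * len(paths)
    for _ in range(max_steps):
        if all(idx[i] == len(p) - 1 for i, p in enumerate(paths)):
            return True                        # everyone reached its goal
        for i, path in enumerate(paths):
            if idx[i] == len(path) - 1 or rng.random() < delay_prob:
                continue                       # finished, or randomly delayed
            nxt = path[idx[i] + 1]
            my_turn = (passed[nxt] < len(order[nxt])
                       and order[nxt][passed[nxt]] == i)
            vacant = all(paths[j][idx[j]] != nxt for j in range(len(paths)))
            if my_turn and vacant:             # preserve the planned order
                passed[nxt] += 1
                idx[i] += 1
    return False
```

Even though each agent is delayed at random, the planned passing order at shared vertices (here vertex 2) is preserved, so the execution stays collision-free.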
  13. /99 14 Building Representation from Learning Building Representation while Planning

    11. 12. Part III. Representation Dissertation Outline
  14. /99 15 Background: Planning in Continuous Spaces given agents (starts)

    goals workspace necessity of constructing roadmap solution paths without collisions
  15. /99 16 Challenge in Representation dense representation sparse representation produced

    by PRM [Kavraki+ 96] complete optimal incomplete suboptimal building small but promising representation for planning quality & solvability effort high low small large speed & scalability
  16. /99 17 Approaches in Representation quality & solvability effort high

    low small large speed & scalability approach-1 [Chap. 11] learning from planning demonstrations approach-2 [Chap. 12] building representation while planning
  17. /99 18 Short-Horizon Planning for MAPF Short-Horizon Planning Guides Long-Horizon

    Planning 4. 6. Part I. Planning Building Representation from Learning Building Representation while Planning 11. 12. Part III. Representation Rest of Talk: Quick Multi-Agent Path Planning discrete continuous
  18. /99 19 Short-Horizon Planning for MAPF Short-Horizon Planning Guides Long-Horizon

    Planning 4. 6. Part I. Planning Building Representation from Learning Building Representation while Planning 11. 12. Part III. Representation Quick Multi-Agent Path Planning discrete continuous
  19. /99 20 MAPF: Multi-Agent Path Finding given agents (starts) graph

    goals solution paths without collisions optimization is intractable in various criteria [Yu+ AAAI-13, Ma+ AAAI-16, Banfi+ RA-L-17, Geft+ AAMAS-22]
  20. /99 21 quality & solvability effort high low small large

    speed & scalability keeping solvability & solution quality with small planning effort complete optimal incomplete suboptimal keeping solvability & solution quality with small planning effort Challenge in Planning
  21. /99 22 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC 0.0% A* [Hart+ 68] example 194x194 |V|=13,214
  22. /99 23 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] complete & optimal 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  23. /99 24 Results on MAPF Benchmark

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] 8.3% CBS [Sharon+ AIJ-15; Li+ AIJ-21] 10.7% BCP [Lam+ COR-22] solution complete* & optimal comp & optimal *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  24. /99 25 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] 8.3% CBS [Sharon+ AIJ-15; Li+ AIJ-21] 10.7% BCP [Lam+ COR-22] 30.9% ODrM*-5 [Wagner+ AIJ-15] comp & optimal SC* & optimal comp & bounded sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  25. /99 26 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] 8.3% CBS [Sharon+ AIJ-15; Li+ AIJ-21] 10.7% BCP [Lam+ COR-22] 30.9% ODrM*-5 [Wagner+ AIJ-15] 50.5% EECBS-5 [Li+ AAAI-21] comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  26. /99 27 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] 8.3% CBS [Sharon+ AIJ-15; Li+ AIJ-21] 10.7% BCP [Lam+ COR-22] 30.9% ODrM*-5 [Wagner+ AIJ-15] 50.5% EECBS-5 [Li+ AAAI-21] 61.4% PP [Silver AIIDE-05] 80.9% LNS2 [Li+ AAAI-22] comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt incomp & sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  27. /99 28 long-horizon (deliberative, offline) planning stage acting stage short-horizon

    (reactive, online) Unblock Me, Google Play Planning Horizon
  28. /99 29 Strategy in Planning quality & solvability effort high

    low small large speed & scalability complete optimal incomplete suboptimal long-horizon (deliberative, offline) short-horizon (reactive, online)
  29. /99 30 Strategy in Planning quality & solvability effort high

    low small large speed & scalability integration short-horizon planning guides long-horizon planning long short
  30. /99 31 Proposed MAPF Algorithms quality & solvability effort high

    low small large speed & scalability PIBT LaCAM LaCAM PIBT
  31. /99 32 Proposed MAPF Algorithms quality & solvability effort high

    low small large speed & scalability PIBT LaCAM
  32. /99 33 planning online planning applicable to lifelong scenarios ≥500

    agents within 50ms ensuring that all agents eventually reach their destinations https://kei18.github.io/pibt2 IJCAI-19 => AIJ-22 Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding KO, Manao Machida, Xavier Defago & Yasumasa Tamura scalable sub-optimal algorithm (PIBT) to solve MAPF iteratively
  33. /99 34 How PIBT works – 1/6

    repeat one-timestep prioritized planning locations at t=1 t=2 t=3 priorities: high mid low decision order 1 2 3 4 5 6 7 8 9 time-window
  34. /99 36 How PIBT works – 3/6 high low mid

    as high priority inheritance [Sha+ IEEE Trans Comput-90]
  35. /99 37 high low mid How PIBT works – 4/6

    1 3 2 decision order … …
  36. /99 38 How PIBT works – 5/6 high as high

    as high as high as high stuck but still not feasible
  37. /99 39 How PIBT works – 6/6 invalid valid re-plan

    re-plan valid You can move invalid You must re-plan, I will stay introduce backtracking repeat this one-timestep planning until termination
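The one-timestep mechanism walked through on these slides — greedy candidate selection, priority inheritance, and backtracking — can be sketched as follows (a simplified, illustrative rendition of PIBT's pseudocode, not the reference implementation; the function name and data layout are mine):

```python
def pibt_step(agents, graph, dist):
    """One timestep of PIBT (simplified). agents: dicts with 'pos' and
    'goal', ordered from highest to lowest priority. graph: vertex ->
    neighbor list. dist(v, g): distance-to-goal heuristic. Returns each
    agent's next vertex."""
    nxt = {}                                   # agent index -> reservation
    occupied_now = {a['pos']: i for i, a in enumerate(agents)}

    def plan(i):
        a = agents[i]
        # candidates sorted greedily: closest to the goal first
        for v in sorted([a['pos']] + graph[a['pos']],
                        key=lambda u: dist(u, a['goal'])):
            if v in nxt.values():              # vertex already reserved
                continue
            j = occupied_now.get(v)
            if j is not None and nxt.get(j) == a['pos']:
                continue                       # would swap positions
            nxt[i] = v
            if j is not None and j != i and j not in nxt:
                if not plan(j):                # priority inheritance: push j
                    del nxt[i]                 # backtracking: j is stuck,
                    continue                   # so try the next candidate
            return True
        nxt[i] = a['pos']                      # nowhere to go: stay put
        return False

    for i in range(len(agents)):               # highest priority first
        if i not in nxt:
            plan(i)
    return [nxt[i] for i in range(len(agents))]
```

On a three-vertex corridor 0–1–2 with a high-priority agent at 0 heading to 2 and a low-priority agent idling at 1, the first agent pushes the second out of the way via inheritance: the step returns the moves [1, 2].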
  38. /99 40 Theoretical Result With dynamic priorities,* in biconnected graphs,

    all agents reach their destinations within finite timesteps convenient in lifelong scenarios important note: PIBT is incomplete for MAPF; the theorem still holds even for unsolvable MAPF *s.t. unreached agents eventually get the highest priority
  39. /99 41 Performance on one-shot MAPF

    [plots: success rate / runtime (sec) / sum-of-costs (normalized; min:1) vs. agents] prioritized planning w/distance-based heuristics [Silver AIIDE-05] A* with operator decomposition greedy version [Standley AAAI-10] EECBS CBS-based, bounded sub-optimal [Li+ AAAI-21] LNS2 large neighborhood search for MAPF [Li+ AAAI-22] PIBT 25 instances 30sec timeout on desktop PC sufficiently long timestep limit 194x194 four-connected grid not so bad not so bad blazing fast! worst: 550ms
  40. /99 42 PIBT is great but…

    [plots: success rate vs. agents] room-64-64-8 (64x64, |V|=3,232) ost003d (194x194, |V|=13,214) random-32-32-20 (32x32, |V|=819) important note: PIBT is incomplete for MAPF PIBT 0% success rate in 30sec We need one more jump!
  41. /99 43 Proposed MAPF Algorithms quality & solvability effort high

    low small large speed & scalability PIBT LaCAM
  42. /99 44 planning LaCAM: Search-Based Algorithm for Quick Multi-Agent Pathfinding

    KO [plots: success rate in 30sec vs. agents] room-64-64-8 ost003d random-32-32-20 100% LaCAM worst: 699ms worst: 394ms worst: 11sec quick & complete algorithm for MAPF (LaCAM; lazy constraints addition search) https://kei18.github.io/lacam AAAI-23
  43. /99 45 Vanilla A* for MAPF search node (configuration)

    greedy search: 44 nodes in general: (5^N)xT nodes N: agents, T: depth intractable even with perfect heuristics goal configuration
  44. /99 46 PIBT for MAPF PIBT PIBT PIBT greedy search:

    44 nodes only 4 configurations repeat one-timestep planning until termination use PIBT to guide exhaustive search initial configuration goal configuration
  45. /99 47 Concept of LaCAM

    PIBT PIBT PIBT use other MAPF algorithms to generate a promising configuration configurations are generated in a lazy manner by two-level search scheme exhaustive search but node generations are dramatically reduced => quick & complete MAPF
  46. /99 51 Lazy Constraints Addition e.g., breadth-first search 24th invocation

    Each configuration eventually generates all connected configurations. Theorem: LaCAM is complete discard node from Open configuration generation with
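A toy rendition may make the two-level, lazy idea concrete (illustrative only: real LaCAM adds per-agent constraints lazily rather than enumerating joint moves, and uses PIBT as the configuration generator — here a plain greedy guide plays that role, and all names are mine):

```python
from collections import deque
from itertools import product

def bfs_dists(graph, src):
    """Exact distance table used as the greedy guide's heuristic."""
    d, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        for u in graph[v]:
            if u not in d:
                d[u] = d[v] + 1
                q.append(u)
    return d

def lacam_like(starts, goals, graph):
    """DFS over configurations; each node lazily yields successors,
    trying the guide's greedy proposal first."""
    dist = {g: bfs_dists(graph, g) for g in goals}

    def valid(cfg, prev):
        if len(set(cfg)) < len(cfg):           # vertex conflict
            return False
        n = len(cfg)
        return not any(cfg[i] == prev[j] and cfg[j] == prev[i]
                       for i in range(n) for j in range(i))  # swap

    def successors(cfg):
        moves = [sorted([v] + graph[v], key=lambda u: dist[g][u])
                 for v, g in zip(cfg, goals)]
        greedy = tuple(m[0] for m in moves)
        if valid(greedy, cfg):
            yield greedy                       # guide's proposal first
        for nxt in product(*moves):            # full expansion happens only
            if nxt != greedy and valid(nxt, cfg):   # if the search returns
                yield nxt

    start, goal = tuple(starts), tuple(goals)
    stack, parent = [(start, successors(start))], {start: None}
    while stack:
        cfg, it = stack[-1]
        if cfg == goal:                        # reconstruct configuration path
            path = []
            while cfg is not None:
                path.append(cfg)
                cfg = parent[cfg]
            return path[::-1]
        nxt = next(it, None)
        if nxt is None:
            stack.pop()                        # all successors generated
        elif nxt not in parent:
            parent[nxt] = cfg
            stack.append((nxt, successors(nxt)))
    return None
```

On a path graph 0–1–2 with a side pocket 3 attached to 1, two agents that must trade sides cannot follow the greedy proposals alone (they would collide), but the lazily generated remaining successors let the search route one agent through the pocket — mirroring how the full expansion backs up the guide.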
  47. /99 52 Performance on one-shot MAPF

    [plots: success rate / runtime (sec) / sum-of-costs (normalized; min:1) vs. agents] prioritized planning w/distance-based heuristics [Silver AIIDE-05] A* with operator decomposition greedy version [Standley AAAI-10] EECBS CBS-based, bounded sub-optimal [Li+ AAAI-21] LNS2 large neighborhood search for MAPF [Li+ AAAI-22] PIBT [Okumura+ AIJ-22] 25 instances, on laptop 30sec timeout sufficiently long timestep limit 194x194 four-connected grid not so bad (diff from PIBT: tie-breaking) LaCAM blazing fast! worst: 699ms perfect!
  48. /99 53 Performance with x1000 agents

    [plots: success rate in 1000sec / runtime (sec) vs. agents (x1000)] 25 instances, timeout of 1000 sec, on desktop PC warehouse-20-40-10-2-2, 340x164; |V|=38,756 perfect! blazing fast! worst: 26sec for 10,000 agents LaCAM wrong thought: “centralized algorithms are not scalable” => NO, the game is changing PIBT demo of 10,000 agents
  49. /99 54 planning Improving LaCAM for Scalable Eventually Optimal Multi-Agent

    Pathfinding KO (under review) LaCAM*: complete algorithm eventually converging to optima improved configuration generator: PIBT with reversed vertex scoring [diagram: configuration & cost before/after rewriting; new edge updated by Dijkstra]
  50. /99 55 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 0.0% A* [Hart+ 68] 0.4% ODrM* [Wagner+ AIJ-15] 8.3% CBS [Sharon+ AIJ-15; Li+ AIJ-21] 10.7% BCP [Lam+ COR-22] 30.9% ODrM*-5 [Wagner+ AIJ-15] 50.5% EECBS-5 [Li+ AAAI-21] 61.4% PP [Silver AIIDE-05] 80.9% LNS2 [Li+ AAAI-22] comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt incomp & sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  51. /99 56 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 67.4% PIBT [Okumura+ AIJ-22] blazing fast! But room for improvements comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt incomp & sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  52. /99 57 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 99.0% LaCAM* blazing fast! complete & eventually optimal! comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt incomp & sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  53. /99 58 Results on MAPF Benchmark [Stern+ SOCS-19]

    [plot: solved instances (%) vs. runtime (sec)] 99.0% LaCAM* complete & eventually optimal! 32/33 maps: all solved in 10sec remaining: maze-128-128-1** **I do not care; it will be too optimized for the benchmark comp & optimal SC* & optimal comp & bounded sub-opt SC & bounded sub-opt incomp & sub-opt *SC: solution complete: unable to distinguish unsolvable instances 13900 instances - 33 grid maps - random scenario - every 50 agents, up to max. (1000) tested on desktop PC
  54. /99 59 Summary in Planning quality & solvability effort high

    low small large speed & scalability short-horizon planning guides long-horizon planning PIBT LaCAM(*) successfully breaking the trade-off
  55. /99 60 Short-Horizon Planning for MAPF Short-Horizon Planning Guides Long-Horizon

    Planning 4. 6. Part I. Planning Building Representation from Learning Building Representation while Planning 11. 12. Part III. Representation Quick Multi-Agent Path Planning discrete continuous
  56. /99 61 MAPF Definition Again given agents (starts) graph goals

    solution paths without collisions reality is continuous
  57. /99 62 Cool coordination but not efficient! why not move

    diagonally? robots follow grid [Okumura+ ICAPS-22]
  58. /99 64 Multi-Agent Path Planning in Continuous Spaces given agents

    (starts) goals solution paths without collisions finding solutions itself is tremendously challenging [Spirakis+ 84, Hopcroft+ IJRR, Hearn+ TCS-05] workspace
  59. /99 65 artificial potential field sampling-based rule-based goal start Strategies

    to Solve Single-Agent Path Planning in Continuous Spaces constructing roadmap
  60. /99 66 SBMP: Sampling-Based Motion Planning state of robot: (x,

    y) (x, y) should be in this region configuration space random sampling & construct roadmap pathfinding on roadmap same scheme even in high dimensions
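The sample-connect-search scheme described on this slide can be sketched for a 2D point robot (a minimal, illustrative PRM; the obstacle model, parameters, and names are my own, not from the dissertation):

```python
import math
import random
from collections import deque

def collision_free(p, q, obstacles, step=0.02):
    """Check the straight segment p-q against circular obstacles (cx, cy, r)."""
    n = max(2, int(math.dist(p, q) / step))
    for k in range(n + 1):
        t = k / n
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.dist((x, y), (cx, cy)) <= r for cx, cy, r in obstacles):
            return False
    return True

def prm(start, goal, obstacles, n_samples=200, radius=0.3, seed=0):
    rng = random.Random(seed)
    pts = [start, goal]
    while len(pts) < n_samples:                # sample collision-free points
        p = (rng.random(), rng.random())
        if collision_free(p, p, obstacles):
            pts.append(p)
    adj = {p: [] for p in pts}                 # connect nearby visible pairs
    for i, p in enumerate(pts):
        for q in pts[:i]:
            if math.dist(p, q) < radius and collision_free(p, q, obstacles):
                adj[p].append(q)
                adj[q].append(p)
    prev, queue = {start: None}, deque([start])   # BFS on the roadmap
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                queue.append(u)
    return None                                # roadmap too sparse
```

Raising `n_samples` or `radius` densifies the roadmap and improves solvability at the cost of more planning effort — exactly the dense-vs-sparse trade-off the surrounding slides discuss.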
  61. /99 67 Naïve Strategy to Solve Multi-Agent Path Planning in

    Continuous Spaces Construct agent-wise roadmaps by SBMP (sampling-based motion planning) methods Solve MAPF on those roadmaps 1. 2. two-phase planning
  62. /99 68 Pitfall – There is a trade-off dense representation

    sparse representation produced by PRM [Kavraki+ 96] complete optimal incomplete suboptimal ideal: small but promising representation for planning quality & solvability effort high low small large speed & scalability
  63. /99 69 Countermeasure biased sampling sampling from important region of

    each agent how to identify? agent-specific features + interactions between agents 🤔 design manually?
  64. /99 70 Countermeasure biased sampling sampling from important region of

    each agent how to identify? agent-specific features + interactions between agents This is a machine learning problem! supervised learning: planning demonstrations as training data
  65. /99 71 quality & solvability effort high low small large

    speed & scalability learning from planning demonstrations CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces KO,* Ryo Yonetani, Mai Nishimura & Asako Kanezaki representation https://omron-sinicx.github.io/ctrm AAMAS-22 *work done as an intern at OMRON SINIC X MAPF algorithm
  66. /99 72 𝐹 model training instances & solutions predict next

    locations MAPF algorithm new instance 𝐹 random walk sampling module next locations for all agents starts path generation compositing solution … t=0 t=1 t=2 CTRMs Workflow Online Inference Offline Training CVAE: Conditional Variational Autoencoder [Sohn+ NeurIPS-15] +importance sampling [Salzmann+ ECCV-20] +multi-agent attention [Hoshen NeurIPS-17]
  67. /99 73 𝐹 next position instance & solution occupancy cost-to-go

    env. info Offline Training & Model Arch. [Sohn+ NeurIPS-15] CVAE features ?
  68. /99 74 𝐹 goal-driven features relative positions, size,

    speeds, etc Offline Training & Model Arch.
  69. /99 77 𝐹 go right [0,0,1]

    next position goal-driven features comm. features indicator feature instance & solution occupancy cost-to-go env. info relative positions, size, speeds, etc attention Offline Training & Model Arch. [Sohn+ NeurIPS-15] CVAE [Hoshen NeurIPS-17]
  70. /99 78 observations for agent-i next predicted location for agent-i

    trained model likely to be used by planners Online Inference
  71. /99 79 Online Inference timestep t timestep t+1 next predicted

    locations for all agents observations for all agents
  72. /99 80 Online Inference

    timesteps 0 1 2 … T-1 T initial locations timed path for agent-i each path is agent-specific and cooperative hyperparameter
  73. /99 81 Online Inference compositing

    timesteps 0 1 2 … T-1 T timed roadmap for agent-i each roadmap is agent-specific and cooperative hyperparameter: #(path generation)
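The compositing step on this slide — merging several sampled timed paths into one agent-specific timed roadmap — can be sketched as follows (my reading of the slide; the node/edge representation is an assumption for illustration):

```python
def composite_timed_roadmap(paths):
    """paths: equal-length location sequences sampled for ONE agent.
    Returns (nodes, edges): nodes[t] lists the candidate locations at
    timestep t; edges[t] links locations at t to locations at t+1, as
    induced by the sampled paths."""
    T = len(paths[0])
    assert all(len(p) == T for p in paths)
    nodes = [sorted({p[t] for p in paths}) for t in range(T)]
    edges = {t: sorted({(p[t], p[t + 1]) for p in paths})
             for t in range(T - 1)}
    return nodes, edges
```

An off-the-shelf MAPF solver then searches these per-agent timed roadmaps jointly; generating more sampled paths (the slide's #(path generation) hyperparameter) yields a denser, more permissive roadmap.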
  74. /99 82 SPARS [Dobson & Bekris, IJRR-14] (random) simplified PRM

    [Karaman & Frazzoli, IJRR-11] square as agent-specific roadmaps grid as used in MAPF studies CTRMs 20-30 homogeneous agents corresponding to 32x32 grids Roadmap Visualization CTRMs produce small but effective roadmaps specific to each agent
  75. /99 83 Quantitative Results

    [plots] 20-30 homogeneous agents corresponding to 32x32 grids 100 instances solved by prioritized planning [Silver, AIIDE-05, Van Den Berg & Overmars IROS-05, etc] CTRMs reduce planning effort while keeping solution qualities params of CTRMs: #(path generations) denser
  76. /99 84 Quantitative Results

    [plots] 20-30 homogeneous agents corresponding to 32x32 grids 100 instances solved by prioritized planning [Silver, AIIDE-05, Van Den Berg & Overmars IROS-05, etc] CTRMs achieve efficient path-planning from the end-to-end perspective Roadmap construction can be much faster. Check our latest implementation: https://github.com/omron-sinicx/jaxmapp quick! denser
  77. /99 85 Strategy in Representation quality & solvability effort high

    low small large speed & scalability learning from planning demonstrations building representation while planning
  78. /99 86 Naïve Strategy to Solve Multi-Agent Path Planning in

    Continuous Spaces Are roadmaps truly used in planning process? Construct agent-wise roadmaps by SBMP (sampling-based motion planning) methods Solve MAPF on those roadmaps 1. 2. two-phase planning decoupled
  79. /99 87 Advanced Strategy to Solve Multi-Agent Path Planning in

    Continuous Spaces develop agent-wise roadmaps by SBMP according to MAPF search progress coupled
  80. /99 88 Quick Multi-Robot Motion Planning by Combining Sampling &

    Search KO & Xavier Defago (under review) planning representation https://kei18.github.io/sssp algorithm (SSSP) to solve multi-robot motion planning quickly by simultaneously performing roadmap construction & collision-free pathfinding
  81. /99 89 MRMP: Multi-Robot Motion Planning MAPF is a special

    case of MRMP solution: collision-free trajectories each agent has its own configuration space blackbox utility functions: connect (in free space? true/false), collide (true/false), steer, sample, dist (e.g., 0.18)
  82. /99 90 Proposed Algorithm: SSSP

    [diagram: search-node expansion; vertex expansion & goals; new vertices; closed; next action] exhaustive search while constructing roadmaps by random walks +many tricks inspired by SBMP & MAPF studies [MAPF; Standley AAAI-10] [SBMP; Hsu+ ICRA-97]
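A single-robot caricature of the coupled scheme may help (illustrative only: the real SSSP is multi-robot and far more sophisticated — the one point shown here is that roadmap construction is driven by search progress; all names and parameters are mine):

```python
import heapq
import math
import random

def sssp_like(start, goal, collision_free, steer_len=0.15, goal_tol=0.1,
              max_iters=5000, seed=0):
    """Greedy best-first search whose roadmap grows DURING the search:
    expanding a vertex also steers one new random sample from it."""
    rng = random.Random(seed)
    parent = {start: None}
    open_heap = [(math.dist(start, goal), start)]
    for _ in range(max_iters):
        if not open_heap:
            break
        _, v = heapq.heappop(open_heap)
        if math.dist(v, goal) < goal_tol:      # close enough: reconstruct
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        # coupled construction: grow the roadmap around the vertex the
        # search currently finds most promising (random-walk-style steer)
        s = (rng.random(), rng.random())
        d = math.dist(v, s)
        if d > 0:
            u = (v[0] + steer_len * (s[0] - v[0]) / d,
                 v[1] + steer_len * (s[1] - v[1]) / d)
            if u not in parent and collision_free(v, u):
                parent[u] = v
                heapq.heappush(open_heap, (math.dist(u, goal), u))
        # keep v expandable so samples accumulate near good regions
        heapq.heappush(open_heap, (math.dist(v, goal) + 1e-3, v))
    return None
```

Because vertices are only ever sampled around nodes the search deems promising, the representation stays small and focused — the same motivation as building the roadmap while planning, rather than in a separate phase.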
  83. /99 91 Theoretical Result SSSP eventually finds a solution for

    solvable instances c.f., probabilistic completeness in SBMP
  84. /99 92 Performance of SSSP

    [plots] Point2d DOF: 2N Point3d DOF: 3N Line2d DOF: 3N Capsule3d DOF: 6N Arm22 DOF: 2N Arm33 DOF: 6N Dubins2d DOF: 3N Snake2d DOF: 6N very promising quick!
  85. /99 93 Summary in Representation quality & solvability effort high

    low small large speed & scalability learning from planning demonstrations building representation while planning successfully breaking the trade-off again
  86. /99 TODAY 95 Planning Representation Execution integration [AAMAS-22+] building effective

    roadmaps [AAAI-21, ICRA-21, ICAPS-22, IJCAI-22, AAAI-23+] overcoming uncertainties ≥1000 agents within 1 sec [IJCAI-19, IROS-21, ICAPS-22, AIJ-22, AAAI-23+] Research Summary
  87. /99 96 Impact of Research Results breaking common belief of

    trade-off (quality vs speed) connecting discrete & continuous spaces => promoting various types of automation develop new horizons of quick multi-agent path planning contribution:
  88. /99 97 YouTube/Mind Blowing Videos warehouse YouTube/StarCraft video game [Flatland

    Challenge, AIcrowd] railway/airport operations [Morris+ AAAIW-16] [Le Goc+ UIST-16] robotic interface [Song+ ICCBB-01] drug discovery [Zhang+ 18] manufacturing [van den Berg+ ISRR-11] crowd simulation [Zhang+ SIGGRAPH-20] puzzle solving pipe routing [Belov+ SOCS-20] Impact of Research Results develop new horizons of quick multi-agent path planning contribution:
  89. /99 98 Future Direction integration LaCAM(*) PIBT planning in discretized

    spaces SSSP CTRMs planning in continuous spaces establish practical methodologies for MRMP iterative refinement robust execution scalable & quick probabilistically complete asymptotically optimal, anytime with kinodynamic constraints
  90. /99 100 Publications 1. “Improving LaCAM for Scalable Eventually Optimal

    Multi-Agent Pathfinding.” KO (under review) 2. “Quick Multi-Robot Motion Planning by Combining Sampling and Search.” KO & Xavier Défago. (under review) 3. “LaCAM: Search-Based Algorithm for Quick Multi-Agent Pathfinding.” KO. AAAI. 2023. 4. “Fault-Tolerant Offline Multi-Agent Path Planning.” KO & Sebastien Tixeuil. AAAI. 2023. 5. “Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding.” KO, Manao Machida, Xavier Défago & Yasumasa Tamura. Artificial Intelligence (AIJ). 2022 (previously presented at IJCAI-19) 6. “Offline Time-Independent Multi-Agent Path Planning.” KO, François Bonnet, Yasumasa Tamura & Xavier Défago. IJCAI. 2022. (extended version is under review at T-RO) 7. “Solving Simultaneous Target Assignment and Path Planning Efficiently with Time-Independent Execution.” KO & Xavier Défago. ICAPS. 2022. (best student paper award, extended version is under review at AIJ) 8. “CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces.” KO, Ryo Yonetani, Mai Nishimura & Asako Kanezaki. AAMAS. 2022. 9. “Iterative Refinement for Real-Time Multi-Robot Path Planning.” KO, Yasumasa Tamura & Xavier Défago. IROS. 2021. Others: 10. “Active Modular Environment for Robot Navigation.” Shota Kameyama, KO, Yasumasa Tamura & Xavier Défago. ICRA. 2021. 11. “Time-Independent Planning for Multiple Moving Agents.” KO, Yasumasa Tamura & Xavier Défago. AAAI. 2021. 12. “Roadside-assisted Cooperative Planning using Future Path Sharing for Autonomous Driving.” Mai Hirata, Manabu Tsukada, KO, Yasumasa Tamura, Hideya Ochiai & Xavier Défago. VTC. 2021. 13. “winPIBT: Extended Prioritized Algorithm for Iterative Multi-agent Path Finding.” KO, Yasumasa Tamura & Xavier Défago. WoMAPF. 2021. 14. “Amoeba Exploration: Coordinated Exploration with Distributed Robots.” KO, Yasumasa Tamura & Xavier Défago. iCAST. 2018. under review peer-reviewed