
[RO-MAN25] Take That for Me: Multimodal Exophora Resolution with Interactive Questioning for Ambiguous Out-of-View Instructions


Shoichi Hasegawa

August 26, 2025

Transcript

1. Take That for Me: Multimodal Exophora Resolution with Interactive Questioning for Ambiguous Out-of-View Instructions. Akira Oyama, Shoichi Hasegawa, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi (Ritsumeikan Univ., Soka Univ., Kyoto Univ.; * is a corresponding author). IEEE International Conference on Robot & Human Interactive Communication (IEEE RO-MAN 2025), Session: LLM/GenAI-Based Multimodal, Multilingual and Multitask Modeling Technologies for Robotic Systems.
2. Research Background. Example: "Bob got a game machine. He plays with it every day." Demonstratives are frequently used in our daily lives, and we perform anaphora resolution to understand what they refer to.
- Endophora resolution [1]: find the people or objects that correspond to the demonstratives and pronouns in the text.
- Exophora resolution [2]: find the people or objects that correspond to demonstratives and pronouns used in speech, from the environment.
It is necessary to perform exophora resolution to enable robots to execute tasks based on queries such as "Take that for me" or "Get me that cup."
[1] R. Le Nagard et al. "Aiding Pronoun Translation with Co-Reference Resolution." Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pp.252–261, 2010. [2] S.-M. Park et al. "Visual Language Integration: A Survey and Open Challenges." Computer Science Review, Vol.48, p.100548, 2023.
3. Previous Research.
GIRAF [3]: action planning from images containing gestures, based on queries with demonstratives. Problem: both the user's gestures and the target objects must be contained within the image.
ECRAP [4]: combines exophora resolution [5], task classification, and LLM-based action planning from various queries. Problems:
- Cannot handle cases where the user is not visible from the robot.
- Cannot handle visual attributes in the query (color or shape).
- Cannot handle cases where the query lacks the target class.
[3] L.-H. Lin et al. "Gesture-Informed Robot Assistance via Foundation Models." CoRL, 2023. [4] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024. [5] A. Oyama et al. "Exophora Resolution of Linguistic Instructions with a Demonstrative based on Real-World Multimodal Information." IEEE RO-MAN, pp.2617–2623, 2023.
4. Challenges and Solutions.
- The user is not visible from the robot → perform sound source localization (SSL) and obtain skeletal information by facing the user's direction.
- Lack of visual attributes in the query (e.g., "pink can") → perform exophora resolution that considers referring expressions, using a vision-language model (VLM) [6].
- Lack of the target class in the query (e.g., "Bring me that," where "that" is unclear) → the robot supplements the missing information by asking the user questions ("What objects should I bring?").
[6] A. Radford et al. "Learning Transferable Visual Models from Natural Language Supervision." ICML, pp.8748–8763, 2021.
5. Research Purpose: to achieve robust real-world exophora resolution that is resilient to incomplete observational data.
[Figure: (a) Sound Source Localization; (b) Exophora Resolution from "that" and pointing; (c) Interactive Questioning. Dialogue: "Take that for me." / "Which object is 'that'?" / "What color is the object?" / "Brown." → task complete, with Cup (target), Bottle, and Doll as candidate objects.]
6. Proposed Model: MIEL (Multimodal Interactive Exophora resolution with user Localization).
[Architecture figure: sound source localization on the user's speech yields the user direction; skeleton detection on the RGB-D image yields the user body coordinates. The linguistic query "Take that cup to kitchen" is encoded by SentenceBERT and CLIP's text encoder and matched against the semantic map's label and visual features (Linguistic Query-based Estimator); the demonstrative "that" feeds the Demonstrative Region-based Estimator; object positions and the pointing direction feed the Pointing Direction-based Estimator. The combined target object probability (e.g., [0.14, 0.41, …, 0.01, 0.02]) goes to GPT-4o for interactive questioning, which identifies the target object (ID: 2, Class: Cup, Probability: 0.41) using the user's answer "The color of the cup is brown."]
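To make the fusion step concrete, here is a minimal Python sketch of how the three estimators' per-object scores could be combined into one target object probability. The product-then-normalize rule and the function name are assumptions for illustration, not the paper's exact combination rule.

```python
import numpy as np

def fuse_estimators(p_query, p_pointing, p_region):
    """Combine per-object scores from the linguistic query-based,
    pointing direction-based, and demonstrative region-based estimators.
    Product fusion and renormalization are illustrative assumptions."""
    scores = np.asarray(p_query) * np.asarray(p_pointing) * np.asarray(p_region)
    return scores / scores.sum()  # normalize to a probability distribution

# Example: object 1 (the "Cup") wins once the three cues agree.
print(fuse_estimators([0.2, 0.5, 0.3], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]))
```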
7. Linguistic Query-based Estimator.
- Constructed a semantic map [7] and recorded the position, image features, and class features of the objects.
- Calculated the cosine similarity between the object features (semantic map) and the instruction features ("Take that cup to kitchen": label features via SentenceBERT, visual features via the CLIP text encoder), then computed the target object probability from the combined, normalized similarities (a minimal sketch follows this slide).
- By using SentenceBERT [8] and CLIP [6], our model can estimate targets that reflect the color and shape of objects.
[6] A. Radford et al. "Learning Transferable Visual Models from Natural Language Supervision." ICML, pp.8748–8763, 2021. [7] B. Chen et al. "Open-Vocabulary Queryable Scene Representations for Real World Planning." IEEE ICRA, pp.11509–11522, 2023. [8] N. Reimers et al. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." EMNLP-IJCNLP, pp.3982–3992, 2019.
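A minimal sketch of the similarity computation above, assuming the embeddings have already been extracted with SentenceBERT and CLIP; the elementwise combination and normalization are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def target_probability(query_clip_feat, query_sbert_feat,
                       obj_visual_feats, obj_label_feats):
    """Score each mapped object against the instruction.
    query_clip_feat:  CLIP text embedding of the instruction (visual attributes)
    query_sbert_feat: SentenceBERT embedding of the instruction (class words)
    obj_visual_feats: per-object CLIP image embeddings from the semantic map
    obj_label_feats:  per-object SentenceBERT label embeddings"""
    vis = np.array([cosine(query_clip_feat, f) for f in obj_visual_feats])
    lab = np.array([cosine(query_sbert_feat, f) for f in obj_label_feats])
    # Clip negative similarities, then combine the two cues (assumed rule).
    score = np.clip(vis, 0, None) * np.clip(lab, 0, None)
    return score / score.sum()  # per-object target probability
```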
8. Pointing Direction-based Estimator. Our model can deal with situations where no user is in the robot's field of vision:
1. Sound source localization (SSL) to estimate the user direction.
2. Turning toward the user.
3. Skeleton detection (MediaPipe) [9].
4. Calculation of the target object probability: compute the angle between the pointing direction (blue) and each object direction (red), and output the probability density of a von Mises distribution at the obtained angles (see the sketch below).
[9] C. Lugaresi et al. "MediaPipe: A Framework for Building Perception Pipelines." arXiv preprint arXiv:1906.08172, 2019.
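A hedged sketch of step 4: score objects by the angle between the pointing ray and each object direction, evaluated under a von Mises density. Taking the eye-to-wrist segment as the pointing ray and the concentration parameter kappa are assumptions for illustration.

```python
import numpy as np
from scipy.stats import vonmises

def pointing_scores(eye, wrist, obj_positions, kappa=4.0):
    """eye, wrist: 3-D coordinates from the MediaPipe skeleton.
    The eye->wrist ray and kappa=4.0 are assumed, not from the paper."""
    ray = (wrist - eye) / np.linalg.norm(wrist - eye)
    scores = []
    for p in obj_positions:
        d = (p - eye) / np.linalg.norm(p - eye)
        angle = np.arccos(np.clip(ray @ d, -1.0, 1.0))
        scores.append(vonmises.pdf(angle, kappa))  # density peaks at angle 0
    scores = np.asarray(scores)
    return scores / scores.sum()
```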
9. Prediction of Target Object with Interactive Questioning. Linguistic query: "Take that cup to the kitchen."
(a) GPT-4o receives the exophora resolution result (top candidates): ID 1, Bottle, probability 0.24; ID 2, Cup, 0.26; ID 37, Bottle, 0.13.
(b) Question generation: "What color is the target object?" The user answers: "Red."
(c) GPT-4o identifies the target object: ID 2, Class: Cup, Probability: 0.26.
- By adding the results of exophora resolution into the LLM, our model can account for their ambiguity.
- If the target cannot be identified, the LLM can ask the user questions to resolve the ambiguity (sketched below).
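A minimal sketch of this questioning step, assuming the OpenAI chat API with GPT-4o as named on the slide; the prompt wording, the confidence threshold, and the candidate format are illustrative assumptions, not the paper's implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_if_ambiguous(query, candidates, threshold=0.5):
    """If no exophora-resolution candidate is confident enough, have the
    LLM generate one clarifying question. Threshold is an assumption."""
    if max(c["probability"] for c in candidates) >= threshold:
        return None  # confident enough; no question needed
    listing = "\n".join(
        f'ID {c["id"]}: {c["class"]} (p={c["probability"]:.2f})'
        for c in candidates
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f'Instruction: "{query}"\n'
                f"Candidate target objects:\n{listing}\n"
                "The referent is ambiguous. Ask the user one short question "
                "(e.g., about color or shape) that best disambiguates them."
            ),
        }],
    )
    return resp.choices[0].message.content

# e.g. ask_if_ambiguous("Take that cup to the kitchen",
#     [{"id": 1, "class": "Bottle", "probability": 0.24},
#      {"id": 2, "class": "Cup", "probability": 0.26},
#      {"id": 37, "class": "Bottle", "probability": 0.13}])
```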
10. Conditions.
Robot orientation: prepared orientations for cases where the user is visible or invisible to the robot from its position.
Linguistic queries: divided into three levels and manually created in Japanese.

Level | Demonstrative | Object Class | Word of Object Feature | Example
Level 1 | ✓ | ✓ | ✓ | "Bring me that blue plastic bottle"
Level 2 | ✓ | ✓ | — | "Bring me that plastic bottle"
Level 3 | ✓ | — | — | "Bring me that"
11. Comparison Methods and Evaluation Metric.

Method | User Pointing | Object Class | Demonstrative | Interactive Questioning
VGPN [10] | ✓ | ✓ | — | —
ECRAP [4] | ✓ | ✓ | ✓ | —
Human | ✓ | ✓ | ✓ | ✓
Human (w/o Interaction) | ✓ | ✓ | ✓ | —
MIEL (ours) | ✓ | ✓ | ✓ | ✓

Evaluation metric: success rate over the trials, SR = (1 / N_trials) · Σ s_i, where s_i = 1 on success and 0 on failure.
[10] J. Hu et al. "VGPN: Voice-Guided Pointing Robot Navigation for Humans." IEEE ROBIO, pp.1107–1112, 2018. [4] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
12. Results: Case where the User is Invisible from the Robot's Position.
- In Level 3, MIEL supplements information lacking in the queries through interactive questioning.
- MIEL showed an SR approximately 2.0 to 4.8 times higher than ECRAP and VGPN → it is effective to supplement missing skeletal information through SSL.
[Table: SR (Top-1) and SR (Top-3) for VGPN [10], ECRAP [4], MIEL (ours), Human, and Human (w/o Interaction) across Levels 1–3 and Total, annotated ×2.0 and ×4.8 for MIEL over the baselines.]
[10] J. Hu et al. "VGPN: Voice-Guided Pointing Robot Navigation for Humans." IEEE ROBIO, pp.1107–1112, 2018. [4] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
13. Example of Successful Prediction of Targets via Interaction. Although it was difficult to identify the target object from "this" alone, the robot could obtain information about the target object class by asking the user questions.
Query: "Take this to the dining room." Without interactive questioning, exophora resolution alone leaves several top candidate target objects; with interactive questioning, the robot asks "Which object?", the user answers that the remote controller is the target, and the correct object is identified.
14. Conclusion.
Summary:
- Proposed MIEL, which leverages SSL and interactive questioning by an LLM to achieve robust exophora resolution.
- Results: SR about 1.2 times higher than the baseline when the robot can see the user, and about 2.0 to 4.8 times higher when the user is out of sight.
Future works:
- Degradation of semantic map quality due to low-resolution images → applying diffusion models.
- Handling dynamic object arrangements → integrating a real-time semantic map update method.
15. Previous Research.
GIRAF [3]: action planning from images containing gestures, based on queries with demonstratives. Problem: both the user's gestures and the target objects must be contained within the image.
ECRAP [4]: combines exophora resolution, task classification, and LLM-based action planning from various queries. Problems: cannot handle cases where the user is not visible from the robot; cannot handle visual attributes in the query (color or shape); cannot handle cases where the query lacks the target class.
[3] L.-H. Lin et al. "Gesture-Informed Robot Assistance via Foundation Models." CoRL, 2023. [4] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
16. Demonstrative Region-based Estimator.
- Each demonstrative series defines a region modeled by a Gaussian distribution, using the different characters of the demonstrative series.
- The robot obtains eye and wrist coordinates from the MediaPipe skeleton detector [10].
- The coordinates (eye and wrist) are used as parameters of the Gaussian distribution (a hedged sketch follows).

Japanese demonstrative | Character
ko- series | Refers near the speaker (e.g., the human)
so- series | Refers near the listener (e.g., the robot)
a- series | Refers to a location far from both

[10] C. Lugaresi et al. "MediaPipe: A Framework for Building Perception Pipelines." arXiv preprint arXiv:1906.08172, 2019.
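A hedged sketch of how a demonstrative series could select a Gaussian region over object positions. The specific means and covariances below are illustrative assumptions; the slide states only that the eye and wrist coordinates parameterize the distribution.

```python
import numpy as np
from scipy.stats import multivariate_normal

def demonstrative_region_scores(series, eye, wrist, obj_positions):
    """Score objects under a Gaussian region chosen per demonstrative
    series; all means and covariances here are assumed for illustration."""
    ray = (wrist - eye) / np.linalg.norm(wrist - eye)
    if series == "ko":       # near the speaker
        mean, cov = eye, np.eye(3) * 0.5
    elif series == "so":     # near the listener (robot assumed at the origin)
        mean, cov = np.zeros(3), np.eye(3) * 0.5
    else:                    # "a" series: far from both, along the pointing ray
        mean, cov = eye + 3.0 * ray, np.eye(3) * 2.0
    region = multivariate_normal(mean=mean, cov=cov)
    scores = np.array([region.pdf(p) for p in obj_positions])
    return scores / scores.sum()
```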
17. Environment Settings.
[Map figure: user positions and robot positions marked in the environment.]
- The robot created an NLMap [8] through pre-exploration of the environment → detected objects across multiple classes in the collected images.
[8] B. Chen et al. "Open-Vocabulary Queryable Scene Representations for Real World Planning." IEEE ICRA, pp.11509–11522, 2023.
18. Comparison Methods.
- VGPN [11]: identifies a target object based on two pieces of information, user pointing and object class.
- ECRAP [6]: performs exophora resolution based on user pointing, object class, and demonstrative.
- Human / Human (w/o Interaction): if the speaker is not visible, they may turn to face the speaker's direction; if the target cannot be identified, they may ask the speaker questions (up to one time). Human (w/o Interaction) shows the results when predicting the target object without asking questions.
[11] J. Hu et al. "VGPN: Voice-Guided Pointing Robot Navigation for Humans." IEEE ROBIO, pp.1107–1112, 2018. [6] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
19. Results: Case where the User is Visible from the Robot's Position.
- MIEL showed about 1.2 times higher performance than ECRAP in SR.
- In Level 3 queries, MIEL could supplement information lacking in the language queries through HRI.
- Compared to Human and Human (w/o Interaction), MIEL achieved approximately 0.54 times their SR.
[Table: SR (Top-1) and SR (Top-3) for VGPN [11], ECRAP [6], MIEL (ours), Human, and Human (w/o Interaction) across Levels 1–3 and Total, annotated ×1.2 (vs. ECRAP) and ×0.54 (vs. Human).]
SR = (1 / N_trials) · Σ s_i, where s_i = 1 on success and 0 on failure.
[11] J. Hu et al. "VGPN: Voice-Guided Pointing Robot Navigation for Humans." IEEE ROBIO, pp.1107–1112, 2018. [6] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
20. Results: Case where the User is Invisible from the Robot's Position.
- ECRAP cannot perform exophora resolution when there is no information on object classes or skeletal information, so no numerical values are given for it.
- MIEL showed an SR approximately 2.0 to 4.8 times higher than ECRAP and VGPN → it is effective to supplement missing skeletal information through SSL.
[Table: SR (Top-1) and SR (Top-3) for VGPN [11], ECRAP [6], MIEL (ours), Human, and Human (w/o Interaction) across Levels 1–3 and Total, annotated ×2.0 and ×4.8.]
SR = (1 / N_trials) · Σ s_i, where s_i = 1 on success and 0 on failure.
[11] J. Hu et al. "VGPN: Voice-Guided Pointing Robot Navigation for Humans." IEEE ROBIO, pp.1107–1112, 2018. [6] A. Oyama et al. "ECRAP: Exophora Resolution and Classifying User Commands for Robot Action Planning by Large Language Models." IEEE IRC, pp.1–8, 2024.
21. Limitations. No significant difference between Level 1 and Level 2 queries → are referring expressions not being taken into account? There seems to be a problem with the image features: the object's bounding box is extracted from low-resolution robot images, resulting in even lower resolution (the actual rectangular images used are shown on the slide).
[Ablation table: SR with SSL only, HRI only, and SSL + HRI, for the user-visible and user-not-visible cases, across Levels 1–3 and Total.]
22. Future Works.
- Increasing user burden due to interactive questioning: we address this by refining the question generation method to limit the number of questions while achieving the desired results.
- Degradation of semantic map quality due to low-resolution object images: we tackle this by applying diffusion models.
- Handling dynamic object arrangements: we address this by integrating a real-time semantic map update method.
- Exophora resolution in noisy environments: we solve this by using sound source separation to distinguish a specific person's voice from background noise.
23. Specification of PC.
PC environment:
- CPU: Intel Core i…KF
- RAM: … GB
- GPU: NVIDIA GeForce RTX … (… GB)
- Software development environment [1]
[1] L. El Hafi et al. "Software Development Environment for Collaborative Research Workflow in Robotic System Integration." Advanced Robotics, 2022.