
Interactive and Active Learning in Social Robots

Slides of my PhD thesis, entitled "Interactive and Active Learning in Social Robots".

Videos, code, datasets, etc. can be downloaded at:
https://github.com/VGonPa/interactive_and_active_learning

Victor González-Pacheco

December 03, 2015

Transcript

  1. INTERACTIVE AND ACTIVE LEARNING FOR SOCIAL ROBOTICS Víctor González Pacheco

    Advisers: Miguel Ángel Salichs María Malfaz Leganés, November 2015
  2. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  3. OPEN CHALLENGES Interaction During Learning: Label acquisition (generally very

    simple). Deciding when to finish the training (a complex task: why not let the robot take this decision?). Detecting novel concepts that require more training (novelty detection in robotics). 6
  4. OBJECTIVES OF THE THESIS “To develop a system that enables

    a social robot to learn interactively in a natural way, similarly to how a person would learn from another person.” 7
  5. OBJECTIVES OF THE THESIS Multimodal Interaction for Learning Apply Active

    Learning for learning new concepts "Robot-driven" learning: let the robot decide how much training it needs Enable the robot to detect new concepts Different domains: pose and object recognition 8
  6. RELATED WORK Interaction in Learning. [Rybski et al., 2007] Flexible

    interaction, but describes tasks, not concepts. Active Learning. Mostly in learning skills. [Rosenthal et al., 2009, 2012], [Cakmak and Thomaz, 2012; Cakmak et al., 2010] AL for concept acquisition. We study the impact of inaccurate user answers 9
  7. RELATED WORK Novelty Detection in robotics Video surveillance [Drews et

    al., 2013] Room semantics acquisition [Pinto et al., 2011] We use it to detect new concepts to learn 10
  8. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  9. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  10. SYSTEM OVERVIEW: TRAINING [Training diagram: the user says "I'm Turned Right"; the ASR

    extracts the label "T-Right", which is stored together with the Kinect data in the Dataset; Machine Learning builds the MODEL.] 16
  11. SYSTEM OVERVIEW: EXPLOITATION [Exploitation diagram: the user asks "What's my pose?"; the

    Kinect data goes to the Pose Classifier, which uses the MODEL, and the TTS answers "You're turned right".] 17
  12. SYSTEM OVERVIEW [Diagram: the full pipeline, combining the training path (ASR, Dataset,

    Machine Learning, MODEL) and the exploitation path (Kinect, Pose Classifier, TTS).] 18
  13. KINECT SKELETON MODEL Head Neck Right Shoulder Right Elbow Right

    Hand Torso Left Shoulder Left Elbow Left Hand Right Hip Right Knee Right Foot Left Hip Left Knee Left Foot 15 Joints 7 Parameters per joint: (x, y, z, qx, qy, qz, qw) 19
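As an illustration of the data layout above, here is a minimal Python sketch (not the thesis code; joint names follow the OpenNI model and the ordering is an assumption) that flattens one skeleton frame into a 15 x 7 = 105-dimensional feature vector:

import numpy as np

# OpenNI 15-joint model, 7 parameters per joint: (x, y, z, qx, qy, qz, qw)
JOINTS = [
    "head", "neck", "torso",
    "left_shoulder", "left_elbow", "left_hand",
    "right_shoulder", "right_elbow", "right_hand",
    "left_hip", "left_knee", "left_foot",
    "right_hip", "right_knee", "right_foot",
]

def pose_to_vector(frame):
    """frame: dict mapping joint name -> (x, y, z, qx, qy, qz, qw)."""
    return np.concatenate([np.asarray(frame[j], dtype=float) for j in JOINTS])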
  14. GRAMMAR-BASED INTERACTION Two different grammars: Pose definition (up to 9

    poses): “I’m standing up, looking to my right.” “Look, Maggie. I am looking towards my right.” Control of the sessions/phases: “Start recording a pose.” “Stop recording.” “End session.” 20
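The slide does not reproduce the actual ASR grammars, so the sketch below only illustrates the idea of grammar-constrained parsing in Python: pose-definition utterances are mapped to a direction label, and control utterances to session commands (the patterns and command names are hypothetical):

import re

# Pose-definition utterances, e.g. "I'm standing up, looking to my right."
POSE_PATTERN = re.compile(
    r"i(?:'m|\s+am)\s+(?:standing up,?\s+)?"
    r"(?:looking|turned|pointing)\s+"
    r"(?:to(?:wards)?\s+)?(?:my\s+)?(left|right|front)",
    re.IGNORECASE)

# Session/phase control utterances.
CONTROL_PATTERNS = {
    "START_RECORDING": re.compile(r"start recording", re.IGNORECASE),
    "STOP_RECORDING": re.compile(r"stop recording", re.IGNORECASE),
    "END_SESSION": re.compile(r"end session", re.IGNORECASE),
}

def parse(utterance):
    match = POSE_PATTERN.search(utterance)
    if match:
        return ("POSE_LABEL", match.group(1).lower())
    for command, pattern in CONTROL_PATTERNS.items():
        if pattern.search(utterance):
            return ("COMMAND", command)
    return ("UNKNOWN", None)

# parse("Look, Maggie. I am looking towards my right.") -> ("POSE_LABEL", "right")
# parse("Stop recording.")                              -> ("COMMAND", "STOP_RECORDING")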
  15. SCENARIO DESCRIPTION [Scenario diagram: robot and user area within the Kinect's field

    of view; distances labelled 180 cm, 240 cm, and 180 cm.] 21
  16. SCENARIO DESCRIPTION 24 Training Users Teaching 9 poses: Turned (L,

    F, R) Looking (L, F, R) Pointing (L, F, R) 22
  17. SCENARIO DESCRIPTION 24 Training Users Teaching 9 poses: Turned (L,

    F, R) Looking (L, F, R) Pointing (L, F, R) 22
  18. SCENARIO DESCRIPTION 24 Training Users Teaching 9 poses: Turned (L,

    F, R) Looking (L, F, R) Pointing (L, F, R) 22
  19. CONCLUSION Robot is able to learn by interacting with the

    user. Grammar-based interaction is powerful, but a bit inflexible: the maximum number of poses is set by the grammar programmer. 25
  20. PUBLISHED RESULTS 26 This section presents a list of journal articles

    published during this thesis. The three journals are indexed in the Journal Citation Reports (JCR), one in the first quartile (Q1) and two in the third quartile (Q3).
    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Novelty Detection for Interactive Pose Recognition by a Social Robot," Int. J. Adv. Robot. Syst., vol. 12, no. 43, p. 1, 2015.
    • V. Gonzalez-Pacheco, M. Malfaz, F. Fernandez, and M. A. Salichs, "Teaching human poses interactively to a social robot," Sensors, vol. 13, no. 9, pp. 12406–12430, 2013.
    • V. Gonzalez-Pacheco, A. Ramey, F. Alonso-Martin, A. Castro-Gonzalez, and M. A. Salichs, "Maggie: A Social Robot as a Gaming Platform," Int. J. Soc. Robot., vol. 3, no. 4, pp. 371–381, Sep. 2011.
    Conferences and Talks:
    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Using novelty detection in HRI: Enabling robots to detect new poses and actively ask for their labels," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, pp. 1110–1115.
    • V. Gonzalez-Pacheco and M. A. Salichs, "Active Learning for Pose Recognition. Studying what and when to ask for Feature Queries," in Proc. of the 8th HRI Pioneers Workshop, 2013, pp. 3–4.
    • J. Sequeira, P. Lima, A. Saffiotti, V. Gonzalez-Pacheco, and M. A. Salichs, "MOnarCH: Multi-Robot Cognitive Systems Operating in Hospitals," in Proc. of the ICRA 2013 Workshop on Crossing the Reality Gap – From Single to Multi to Many Robot Systems, 2013, p. 1.
    • A. Valero-Gomez, J. Gonzalez-Gomez, V. Gonzalez-Pacheco, and M. A. Salichs, "Printable creativity in plastic valley UC3M," in Proceedings of the 2012 IEEE Global Engineering Education Conference (EDUCON), 2012, pp. 1–9.
    • A. Ramey, V. González-Pacheco, and M. A. Salichs, "Integration of a low-cost RGB-D sensor in a social robot for gesture recognition," in Proceedings of the 6th International Conference on Human-Robot Interaction – HRI '11, 2011, pp. 229–230.
    • F. Alonso-Martin, V. Gonzalez-Pacheco, A. Castro-Gonzalez, A. Ramey, M. Yébenes, and M. A. Salichs, "Using a social robot as a gaming platform," in 2nd International Conference on Social Robotics, 2010, pp. 30–39.
  21. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  22. SYSTEM DESCRIPTION: TRAINING [Training diagram: the Kinect RGB image is segmented into a

    ROI around the hand; the hand is located, 2D features (RGB) and 3D features (point cloud) are extracted and aggregated into the Dataset.] 31
  23. SYSTEM DESCRIPTION: EXPLOITATION [Exploitation diagram: the same ROI segmentation and feature

    extraction is applied to a new view; the 2D and 3D features are matched against the Dataset to produce the OBJECT ID.] 32
  24. SYSTEM DESCRIPTION [Diagram: the full OCULAR SYSTEM, combining the training and

    exploitation pipelines.] 33
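The slides do not say which 2D descriptors OCULAR extracts, so the following Python sketch only illustrates the "Extract 2D Features" and "Match 2D Features" blocks with ORB keypoints and brute-force Hamming matching in OpenCV (function names and the ratio test are illustrative, not the thesis implementation):

import cv2

orb = cv2.ORB_create()

def extract_2d_features(roi_bgr):
    """Extract ORB descriptors from the ROI segmented around the hand."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return descriptors

def match_2d_features(query_desc, stored_desc, ratio=0.75):
    """Count good matches between a new view and a stored view of an object."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(query_desc, stored_desc, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)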
  25. EXPERIMENT DESCRIPTION Room with mixed natural and artificial light 6

    objects Trained with 1, 5 and 10 views per object. 35
  26. RESULTS: LEARNING CURVE 37 [Learning curve plot: F1 Score when training with

    1 View, 5 Views, and 10 Views per object.]
  27. RESULTS COMBINING MATCHERS (10 VIEWS) F1-score of the RGB and Point Cloud

    matchers, and of both combined (10 views per object, w = 0.6):
    Object      RGB   Point Cloud  Combined
    ball        0.62  0.69         0.72
    skull       0.67  0.47         0.69
    cup         0.61  0.51         0.77
    bottle      0.72  0.63         0.89
    mobile      0.76  0.53         0.82
    calculator  0.85  0.73         0.83
    Total       0.71  0.59         0.79
    38
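The slide gives the combination weight (w = 0.6) but not the fusion rule itself; one plausible reading is a weighted average of the per-object scores of the RGB and Point Cloud matchers, sketched below (names are hypothetical):

def combine_scores(rgb_scores, pc_scores, w=0.6):
    """rgb_scores / pc_scores: dict object_id -> matching score in [0, 1]."""
    objects = set(rgb_scores) | set(pc_scores)
    combined = {obj: w * rgb_scores.get(obj, 0.0) + (1 - w) * pc_scores.get(obj, 0.0)
                for obj in objects}
    best = max(combined, key=combined.get)   # predicted object ID
    return best, combined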
  28. CONCLUSION System working in real time RGB (2D) prediction performed

    better than 3D prediction Combining both matchers improves performance System reaches ~80% F1 Score 39
  29. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  30. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  31. MOTIVATION What happens when the robot asks the human and

    he/she provides an inaccurate answer? 44
  32. FROM FEATURE QUERIES... Free Speech Queries (FSQ): “Which is the

    most important limb in this pose?” “Which limbs are important in this case?” Yes/No Queries (YNQ): “Is the hand important?” “Should I pay attention to your head when you are pointing?” Rank Queries (RQ): “How important is your hand?” 46
  33. ...TO FEATURE FILTERS Build a threshold (Th) that is averaged

    from the different answers. Limbs below the threshold are filtered out. 47 [Skeleton diagram with the 15 joints]
  34. ...TO FEATURE FILTERS Build a threshold (Th) that is averaged

    from the different answers. Limbs below the threshold are filtered out. 47 Example: [Skeleton diagram with the 15 joints]
  35. ...TO FEATURE FILTERS Build a threshold (Th) that is averaged

    from the different answers. Limbs below the threshold are filtered out. 47 Example: Users selected: {hand, elbow} [Skeleton diagram with the 15 joints]
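A minimal sketch of how such a feature filter could be built, assuming each user's answer is converted into a per-joint importance value (for example from Rank Queries); the averaging rule, names, and threshold handling are illustrative, not the thesis implementation:

import numpy as np

def build_filter(answers, joints, threshold):
    """answers: list of dicts joint -> importance reported by one user."""
    average = {j: float(np.mean([a.get(j, 0.0) for a in answers])) for j in joints}
    return {j for j, score in average.items() if score >= threshold}

def apply_filter(frame, kept_joints):
    """Keep only the (x, y, z, qx, qy, qz, qw) blocks of the selected joints."""
    return np.concatenate([np.asarray(frame[j], dtype=float)
                           for j in sorted(kept_joints)])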
  36. SCENARIO DESCRIPTION 24 Training Users Teaching 9 poses: Turned (L,

    F, R) Looking (L, F, R) Pointing (L, F, R) [Scenario diagram: robot and user area within the Kinect's field of view; distances labelled 180 cm, 240 cm, and 180 cm.] 48
  37. EXTENDED FILTER User-selected limbs + adjacent ones 54 [Skeleton

    diagram with the 15 joints]
  38. EXTENDED FILTER Example: User-selected limbs + adjacent ones 54 [Skeleton

    diagram with the 15 joints]
  39. EXTENDED FILTER Example: User-selected limbs + adjacent ones Users select:

    {head, neck} 54 [Skeleton diagram with the 15 joints]
  40. EXTENDED FILTER Example: User-selected limbs + adjacent ones Users select:

    {head, neck} Extended Filter (EF): {head, neck, Lshoulder, Rshoulder} 54 [Skeleton diagram with the 15 joints]
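A small sketch of the Extended Filter idea, using an adjacency map over the OpenNI skeleton shown in the slides (the map and function names are assumptions); with the slide's example, {"head", "neck"} extends to the head, neck, and both shoulders:

# Kinematic-chain neighbours of each joint (OpenNI 15-joint skeleton).
ADJACENT = {
    "head": {"neck"},
    "neck": {"head", "left_shoulder", "right_shoulder"},
    "left_shoulder": {"neck", "left_elbow", "torso"},
    "right_shoulder": {"neck", "right_elbow", "torso"},
    "left_elbow": {"left_shoulder", "left_hand"},
    "right_elbow": {"right_shoulder", "right_hand"},
    "left_hand": {"left_elbow"},
    "right_hand": {"right_elbow"},
    "torso": {"left_shoulder", "right_shoulder", "left_hip", "right_hip"},
    "left_hip": {"torso", "left_knee"},
    "right_hip": {"torso", "right_knee"},
    "left_knee": {"left_hip", "left_foot"},
    "right_knee": {"right_hip", "right_foot"},
    "left_foot": {"left_knee"},
    "right_foot": {"right_knee"},
}

def extended_filter(selected):
    """{'head', 'neck'} -> {'head', 'neck', 'left_shoulder', 'right_shoulder'}."""
    extended = set(selected)
    for joint in selected:
        extended |= ADJACENT[joint]
    return extended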
  41. EXTENDED FILTER RESULTS LOOKING 55 [Plots: F1 Score vs. number of users for

    Random Forest and SVM, comparing Passive learning, AL - FSQ, AL - RQ, and AL - EF.]
  42. EXTENDED FILTER RESULTS POINTING 56 [Plots: F1 Score vs. number of users for

    Random Forest and SVM, comparing Passive learning, AL - FSQ, AL - RQ, and AL - EF.]
  43. CONCLUSION Sometimes users give inaccurate answers (e.g. the looking experiment), reducing

    model accuracy. The Extended Filter (EF) mitigates this. EF also performs well when the user provides accurate answers. 57
  44. PUBLISHED RESULTS 58

    • V. Gonzalez-Pacheco, M. Malfaz, and M. A. Salichs, "Asking rank queries in pose learning," in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction – HRI '14, 2014, pp. 164–165.
    • V. Gonzalez-Pacheco and M. A. Salichs, "Active Learning for Pose Recognition. Studying what and when to ask for Feature Queries," in Proc. of the 8th HRI Pioneers Workshop, 2013, pp. 3–4.
    • J. Sequeira, P. Lima, A. Saffiotti, V. Gonzalez-Pacheco, and M. A. Salichs, "MOnarCH: Multi-Robot Cognitive Systems Operating in Hospitals," in Proc. of the ICRA 2013 Workshop on Crossing the Reality Gap – From Single to Multi to Many Robot Systems, 2013, p. 1.
    • A. Valero-Gomez, J. Gonzalez-Gomez, V. Gonzalez-Pacheco, and M. A. Salichs, "Printable creativity in plastic valley UC3M," in Proceedings of the 2012 IEEE Global Engineering Education Conference (EDUCON), 2012, pp. 1–9.
    • V. Gonzalez-Pacheco, M. Malfaz, A. Castro-Gonzalez, and M. A. Salichs, "How Much Should a Social Robot Trust the User Feedback? Analyzing the Impact of Verbal Answers in Active Learning," Int. J. Soc. Robot., 2015 [UNDER REVIEW].
  45. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  46. OBJECTIVES Let the robot decide whether it needs more

    examples (using Active Learning) Increase the richness of the interactions compared to the previous version No grammar limiting the maximum number of objects to learn 62
  47. SYSTEM OVERVIEW 65 [Architecture diagram: ASR feeds the multimodal FUSION, which feeds the

    Multimodal Dialog Manager; the Dialog Manager drives the FISSION and the TTS; OCULAR sends predictions to the AL Module, which forwards the prediction uncertainty to the Dialog Manager; feedback and configuration messages close the loop.]
  48. MULTIMODAL FUSION Data aggregators (vision and language) Receive input from

    sensors or other modules Produce higher-level information and send it to the Dialogue Manager 66 [Architecture diagram]
  49. MULTIMODAL FUSION NLP MODULES Functions: Stemming and lemmatization Part-of-speech tagging

    Extract object names (nouns) and commands (verbs) 67 [Architecture diagram]
  50. MULTIMODAL FUSION NLP EXAMPLE "Would you like to learn a

    new object?" Would/MD you/PRP like/VB to/TO learn/VB a/DT new/JJ object/NN ?/. => [command: LEARN] "This is a ball" This/DT be/VB a/DT ball/NN => [object: BALL] 68 [Architecture diagram]
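The slides do not name the NLP toolkit used; as a rough illustration of the same pipeline (tokenization, POS tagging, lemmatization, and extraction of command verbs and object nouns), here is a sketch with NLTK (the command-verb list is hypothetical, and the usual NLTK data packages must be downloaded first):

import nltk
from nltk.stem import WordNetLemmatizer

COMMAND_VERBS = {"learn", "record", "start", "stop"}  # illustrative set
lemmatizer = WordNetLemmatizer()

def parse_utterance(text):
    """Return (command verbs, object nouns) found in the utterance."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    commands = [lemmatizer.lemmatize(w.lower(), "v") for w, tag in tagged
                if tag.startswith("VB")
                and lemmatizer.lemmatize(w.lower(), "v") in COMMAND_VERBS]
    objects = [lemmatizer.lemmatize(w.lower()) for w, tag in tagged
               if tag.startswith("NN")]
    return commands, objects

# parse_utterance("Would you like to learn a new object?") -> (["learn"], ["object"])
# parse_utterance("This is a ball")                        -> ([],        ["ball"])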
  51. DIALOGUE MANAGER Rule-based production system (Iwaki) Processes high-level info

    from the Multimodal Fusion Decides what action to perform and tells the Multimodal Fission to execute it Dialogues are pre-coded plans written in XML files 69 Iwaki: https://github.com/maxipesfix/iwaki [Architecture diagram]
  52. ACTIVE LEARNING MODULE Gets predictions from the OCULAR module Sends the uncertainty

    of the prediction to the Dialogue Manager (DM) The decision whether to query the user is made by the DM 70 [Architecture diagram]
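A hedged sketch of what the "uncertainty of the prediction" could look like, using the margin between the two best-scored objects (the measure and the threshold are illustrative; as the slide says, the Dialogue Manager, not this module, makes the final decision to query):

def prediction_uncertainty(scores):
    """scores: dict object_id -> matching score. Returns (best_object, uncertainty)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0][1]
    second = ranked[1][1] if len(ranked) > 1 else 0.0
    return ranked[0][0], 1.0 - (best - second)   # small margin -> high uncertainty

def should_query(uncertainty, threshold=0.8):
    """Example policy a Dialogue Manager might apply to decide whether to ask."""
    return uncertainty >= threshold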
  53. CONCLUSION Robot can learn after training session has ended Poor

    performance of Point Cloud matcher High divergence between matchers Very verbose when querying Use different strategy for querying 76
  54. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  55. MOTIVATION 80 What happens when you expose the robot to

    new concepts? Is it able to detect that it needs more training?
  56. MOTIVATION 80 [Figure: examples of how different users pointed (Right, Front, Left) during

    training. Some users pointed with their right hand while others used their left; some users used their right hand to point to their right and to the front but the other hand when pointing to their left; some users looked in the direction they were pointing, while others looked at the robot.] What happens when you expose the robot to new concepts? Is it able to detect that it needs more training?
  57. Novelty Score Extreme Value Theory (EVT) In this example, the

    score of the new data entry is one standard deviation away from the average value, µ, of the normal scores. A value of one means that a new entry is labelled as normal if it lies within the 68% of scores closest to the mean of the dataset. If the standard score is higher than 1, the score z(o1) can be considered high with respect to 68% of the normal entries, and it can be classified as strange. Imagine we now use a threshold of +Kσ and -Kσ instead of +1σ and -1σ: novelty.score(o1) = |z(o1) - µ| / (K · σ). If we develop this expression: 85
  58. K: curiosity factor. Using the threshold +Kσ and -Kσ instead of +1σ and -1σ,

    the novelty score becomes novelty.score(o1) = |z(o1) - µ| / (K · σ), and an entry is classified as strange (novel) when K · σ ≤ |z(o1) - µ|, i.e. when 1 ≤ |z(o1) - µ| / (K · σ). 86
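Under the reconstruction above, the novelty score can be computed as in the sketch below (names are illustrative): z_o is the score the model assigns to the new observation, mu and sigma are the mean and standard deviation of the scores of the known ("normal") data, and K is the curiosity factor:

import numpy as np

def novelty_score(z_o, normal_scores, K=1.0):
    mu, sigma = np.mean(normal_scores), np.std(normal_scores)
    return abs(z_o - mu) / (K * sigma)

def is_strange(z_o, normal_scores, K=1.0):
    # Novel when the score lies beyond +/- K standard deviations from the mean.
    return novelty_score(z_o, normal_scores, K) >= 1.0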
  59. SYSTEM DIAGRAM 87 [Flow diagram: get new pose → interesting? NO → noise;

    YES → strange? NO → interesting but known by the model; YES → interesting novelty → learn pose]
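The diagram's decision flow written out as a small sketch (the threshold directions and names are assumptions; the slides do not give the exact tests):

def classify_new_pose(noise_score, strangeness, noise_th, strange_th):
    """Return 'noise', 'known', or 'novelty' for a newly observed pose."""
    if noise_score > noise_th:        # interesting? -> NO: discard as noise
        return "noise"
    if strangeness < strange_th:      # strange? -> NO: interesting but known by the model
        return "known"
    return "novelty"                  # interesting novelty -> learn the pose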
  60. FILTERED JOINTS [Figure: OpenNI's kinematic model of the human body (NI stands for Natural

    Interaction), with Head, Neck, Torso, Shoulders, Elbows, Hands, Hips, Knees, and Feet.] Each data instance is S = (t, u, J), where t is the time-stamp of the data frame, u is the user identification given by the OpenNI framework, and J are the joints. 9 Joints 7 Parameters per joint: (x, y, z, qx, qy, qz, qw) 89
  61. LET'S SEE AN EXAMPLE 90 [Plots: noise score and strangeness score for KM, LSA,

    SVM1C, and GMM; observed poses from previous sessions, observed poses during the session, and the current observed pose.]
  62. LET'S SEE AN EXAMPLE 91 [Plots: noise score and strangeness score for KM, LSA,

    SVM1C, and GMM; observed poses from previous sessions, observed poses during the session, and the current observed pose.]
  63. LET'S SEE AN EXAMPLE 92 [Plots: noise score and strangeness score for KM, LSA,

    SVM1C, and GMM; observed poses from previous sessions, observed poses during the session, and the current observed pose.]
  64. NOISE FILTER [Plot: noise score vs. number of users showing the same pose.] Deciding when

    new instances start to become interesting by forming clusters. 93 [System flow diagram repeated]
  65. STRANGENESS EVALUATION Detecting "novelties" [Plot: F1-Score vs. number of training users

    (5, 10, 20) for GMM, One Class SVM, LSA, and K-means.] 94 [System flow diagram repeated]
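As an illustration of one of the four detectors compared on the slide, here is a minimal one-class SVM novelty detector with scikit-learn, trained on already-known pose vectors and flagging strange new frames (the data is synthetic and the parameters are illustrative):

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
known_poses = rng.normal(size=(200, 105))        # stand-in for known pose vectors
new_frames = rng.normal(loc=3.0, size=(5, 105))  # stand-in for a novel pose

detector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(known_poses)
print(detector.predict(new_frames))   # -1 = strange / novel, +1 = known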
  66. CONCLUSION Proof of concept. The curiosity factor still needs to be calculated

    and studied experimentally. Study our approach in other applications. 95
  67. PUBLISHED RESULTS 96 This section presents a list of journal articles

    published during this thesis. The three journals are indexed in the Journal Citation Reports (JCR), one in the first quartile (Q1) and two in the third quartile (Q3).
    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Novelty Detection for Interactive Pose Recognition by a Social Robot," Int. J. Adv. Robot. Syst., vol. 12, no. 43, p. 1, 2015.
    • V. Gonzalez-Pacheco, M. Malfaz, F. Fernandez, and M. A. Salichs, "Teaching human poses interactively to a social robot," Sensors, vol. 13, no. 9, pp. 12406–12430, 2013.
    • V. Gonzalez-Pacheco, A. Ramey, F. Alonso-Martin, A. Castro-Gonzalez, and M. A. Salichs, "Maggie: A Social Robot as a Gaming Platform," Int. J. Soc. Robot., vol. 3, no. 4, pp. 371–381, Sep. 2011.
    Conferences and Talks:
    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Using novelty detection in HRI: Enabling robots to detect new poses and actively ask for their labels," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, pp. 1110–1115.
  68. Poses Objects Poses Objects Novelties Introduction Part I: Interactive Learning

    Part II: Interactive and Active Learning Conclusions
  69. CONCLUSIONS “To develop a system that enables a social robot

    to learn interactively in a natural way, similarly to how a person would learn from another person.” 100
  70. KEY CONTRIBUTIONS INTERACTIVE LEARNING Developed a system that enables robots

    to learn while interacting 2 Different approaches Grammar-based Dialogue-based (+ NLP) 101
  71. KEY CONTRIBUTIONS ACTIVE LEARNING Studied how inaccurate user answers might

    impact learning Extended Filter mitigates this problem Robot uses AL to keep learning after the training session ends by asking questions when it is uncertain Robot can learn new concepts 102
  72. FUTURE WORK INTERACTION Study how robot learning is perceived by

    humans ACTIVE LEARNING Continuous learning Apply to other domains 103
  73. JOURNALS This section presents

    a list of journal articles published during this thesis. The three journals are indexed in the Journal Citation Reports (JCR), one in the first quartile (Q1) and two in the third quartile (Q3).
    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Novelty Detection for Interactive Pose Recognition by a Social Robot," Int. J. Adv. Robot. Syst., vol. 12, no. 43, p. 1, 2015.
    • V. Gonzalez-Pacheco, M. Malfaz, F. Fernandez, and M. A. Salichs, "Teaching human poses interactively to a social robot," Sensors, vol. 13, no. 9, pp. 12406–12430, 2013.
    • V. Gonzalez-Pacheco, A. Ramey, F. Alonso-Martin, A. Castro-Gonzalez, and M. A. Salichs, "Maggie: A Social Robot as a Gaming Platform," Int. J. Soc. Robot., vol. 3, no. 4, pp. 371–381, Sep. 2011.
    105
  74. CONFERENCES (I)

    • V. Gonzalez-Pacheco, A. Sanz, M. Malfaz, and M. A. Salichs, "Using novelty detection in HRI: Enabling robots to detect new poses and actively ask for their labels," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, pp. 1110–1115.
    • V. Gonzalez-Pacheco, M. Malfaz, and M. A. Salichs, "Asking rank queries in pose learning," in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction – HRI '14, 2014, pp. 164–165.
    • V. Gonzalez-Pacheco and M. A. Salichs, "Active Learning for Pose Recognition. Studying what and when to ask for Feature Queries," in Proc. of the 8th HRI Pioneers Workshop, 2013, pp. 3–4.
    • J. Sequeira, P. Lima, A. Saffiotti, V. Gonzalez-Pacheco, and M. A. Salichs, "MOnarCH: Multi-Robot Cognitive Systems Operating in Hospitals," in Proc. of the ICRA 2013 Workshop on Crossing the Reality Gap – From Single to Multi to Many Robot Systems, 2013, p. 1.
    106
  75. CONFERENCES (II)

    • A. Valero-Gomez, J. Gonzalez-Gomez, V. Gonzalez-Pacheco, and M. A. Salichs, "Printable creativity in plastic valley UC3M," in Proceedings of the 2012 IEEE Global Engineering Education Conference (EDUCON), 2012, pp. 1–9.
    • A. Ramey, V. González-Pacheco, and M. A. Salichs, "Integration of a low-cost RGB-D sensor in a social robot for gesture recognition," in Proceedings of the 6th International Conference on Human-Robot Interaction – HRI '11, 2011, pp. 229–230.
    • F. Alonso-Martin, V. Gonzalez-Pacheco, A. Castro-Gonzalez, A. Ramey, M. Yébenes, and M. A. Salichs, "Using a social robot as a gaming platform," in 2nd International Conference on Social Robotics, 2010, pp. 30–39.
    107
  76. ONGOING PUBLICATIONS 108 • Victor Gonzalez-Pacheco, María Malfaz, Álvaro Castro-González,

    Miguel A. Salichs. How Much should a Social Robot trust the user feedback? Analyzing the impact of Verbal Answers in Active Learning. Int. Journal Social Robotics. 2015 [UNDER REVIEW] • Victor Gonzalez Pacheco, Maria Malfaz, Miguel A. Salichs. Active Learning for in-hand Object Recognition. Sensors. 2016. [IN PREPARATION]
  77. INTERACTIVE AND ACTIVE LEARNING FOR SOCIAL ROBOTICS Víctor González Pacheco

    Advisers: Miguel Ángel Salichs María Malfaz Leganés, November 2015