
Mechanisms for Opponent Modelling

Imperial ACM
November 22, 2013


In our everyday lives we engage in numerous dialogues as we socialise with the people around us. Much of the time we attempt to persuade others to accept our point of view, which requires that we strategise against them so as to increase our chances of winning. However, since we are unaware of their knowledge, we can only rely on our assumptions about what they might know, which are usually based on our previous dialogues with them, i.e. on the arguments they used in those dialogues. So for every individual we come up against we can construct a model of their knowledge, consisting of all the arguments they have ever used against us, and rely on it for strategising.

But wait a second! Is this all we can do? Could we possibly infer more about what someone might know, given what we assume they know? And if so, how? In this presentation we explore this possibility through everyday examples, which appeal to intuition and make conveying the ideas behind our research easy and fun. Need more reasons to attend this talk? How about never losing an argument again?


Transcript

  1. MECHANISMS for OPPONENT MODELLING Imperial College Seminar Christos Hadjinikolis Supervisors:

    Dr. S. Modgil, Dr. E. Black, Prof. P. McBurney 11/25/2013 Department of Informatics King's College London
  2. • I am a senior PhD student in the dept.

    of Informatics at King’s College London • I am a member of the Agents & Intelligent Systems group • Supervised by: Dr. Sanjay Modgil, Dr. Elizabeth Black, Prof. Peter McBurney. Introduction
  3. • I was born on a cold autumn day:

    November 26th, 1984! More about me!
  4. • Introduction • Background • Problem Description • Contribution •

    Proposed methodology • An example • Complexity • Monte-Carlo Simulation • Experimental Results. Presentation overview
  5. • Our work deals with the notion of strategising in

    argument-based dialogue systems. • Such systems formalise how participants exchange locutions in dialogues with respect to a dialogical objective. • In such systems, dialogues are perceived as games, where at any given stage, the dialogue’s protocol determines a set of possible moves that an agent can play in reply to a move of its interlocutor. • The strategy problem concerns choosing a move out of that set, so as to maximise a participant’s chances of satisfying its self-interested objectives. Background: General Introduction
  6. • An abstraction of non-monotonic logics where: • Arguments for

    and against a claim are produced and evaluated so as to test the acceptability of that claim under a given semantics • A logical system is converted to an argumentation one, expressed as an argumentation framework AF = <A, C>. Background: Argumentation systems [Figure: two mutually attacking arguments, A built from {p, p=>q} and B built from {s, s=>¬q}]
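The slide's framework AF = <A, C> can be made concrete with a short sketch. This is not from the slides: the `grounded_extension` helper and all names are illustrative assumptions, showing one standard way to evaluate acceptability (grounded semantics) over a set of arguments and an attack relation.

```python
# Minimal sketch of an abstract argumentation framework AF = <A, C>:
# a set of arguments and a binary attack relation. The grounded
# extension iteratively accepts arguments whose attackers are all
# already defeated.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# The slide's example: A (from {p, p=>q}) and B (from {s, s=>~q})
# attack each other over the claim q, so neither is sceptically accepted.
print(grounded_extension({"A", "B"}, {("A", "B"), ("B", "A")}))
```

With a one-directional attack instead, e.g. attacks `{("A", "B")}`, the unattacked argument A would be accepted and B defeated.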
  7. Background: Opponent Modelling

    & Strategising [Figure: an agent’s own KB beside its opponent’s KB; both contain arguments A, B, C, while the opponent’s also contains D and E]
  8. Problem Description: How

    to build an Opponent Model [Figure: an agent’s own KB beside its opponent’s KB, each containing arguments A, B, C, D]
  9. Problem Description: How

    to build an Opponent Model • Yuqing Tang, Kai Cai and Simon Parsons. “A system of argumentation for reasoning about trust”. In Proceedings of the 8th European Workshop on Multi-Agent Systems, 2010.
  10. Intuition [Figure: arguments A, B,

    C, D exchanged in a dialogue between a blue agent and a red one]
  11. Intuition [Figure: arguments A, B,

    C exchanged in a dialogue between a blue agent and a green one] ???
  12. Proposed methodology: Building

    a relationship graph • We rely on this hypothesis in order to create a mapping of a set of arguments with respect to a relationship factor (a relationship graph (RG)), based on the accumulated experience collected from engaging in numerous dialogues with different opponents • We then use this mapping to augment an existing opponent model (OM), adding to it arguments that are likely also to be known to that opponent, based on their relevance relationship with arguments already in the OM.
  13. Intuition: Assume that

    two agents Ag1 and Ag2 engage in a dialogue in order to decide where to have an enjoyable dinner: – Ag1: (X) We should go to the “Massala” Indian restaurant. – Ag2: (Y) Why there? – Ag1: (N) Because I read in today’s newspaper that it was proposed by a famous chef. – Ag2: (Z) Is the chef’s opinion trustworthy? – Ag1: (Q) Yes, I heard that he won the national “best chef award” this year. – Ag2: (J) Indian food is too oily though, and thus not healthy. – Ag1: (S) It’s healthy, as it’s made of natural foods and fats. [Figure: the resulting dialogue tree over arguments X, N, Y, Z, Q, J, S]
  14. Intuition: Assume that

    two agents Ag1 and Ag2 engage in a dialogue in order to decide where to have an enjoyable dinner: [Figure: dialogue tree over arguments X, N, Y, Z, Q, J, S] • Assume that Ag2 enters another dialogue with an agent Ag3 on the same topic. • Assume that at some point Ag3 cites the newspaper article (N) as Ag1 did. • It is then reasonable for Ag2 to expect that Ag3 is likely to also be aware of the chef’s qualifications (Q).
  15. Intuition [Figure: dialogue tree over arguments X, N,

    Y, Z, Q, J, S] • This implies that: • consecutive arguments in a dialogue have some kind of relationship. • In this case, arguments (N) and (Q) appear to be related • Awareness of the first implies likely awareness of the second. • They support each other!!!
  16. Intuition [Figure: dialogue tree over arguments X, N,

    Y, Z, Q, J, S] • How about (N) and (S)??? • Well, they address different topics in the dialogue: • N and Q appear in a particular branch of a dialogue tree instantiated by Ag2’s question (Y), while S was asserted by Ag1 in an attempt to respond to J, Ag2’s alternative reply to X. • We will assume that our hypothesis applies only to arguments asserted in the same branch of a dialogue tree.
  17. Proposed methodology: Building

    a relevance graph • We assume an RG to be incrementally built as an agent engages in numerous dialogues, being empty at the beginning and constantly updated with newly encountered opponent arguments. • Condition: connected arguments must be in the same path of a dialogue tree and no more than n levels apart (w.r.t. opponent arguments alone) [Figure: dialogue tree with arguments A, B, C, D, E, F, G across levels 0–3]
  18. Proposed methodology: Building

    a relevance graph [Figure: dialogue tree with arguments A–H across levels 0–5, showing the RG edges induced for n = 1 and for n = 2] • This modelling approach simply reflects the implied relationship that consecutive opponent arguments have in a single branch of a tree. • Through modifying the value of n one can strengthen or weaken the connectivity, and so the relationship, between arguments in the induced RG.
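The RG-building condition above can be sketched in a few lines. This is an illustrative assumption, not the authors' implementation: each dialogue branch is taken as an ordered list of the opponent's arguments, and two arguments are linked (with a co-occurrence count as the relationship factor) whenever they are at most n positions apart on the same branch.

```python
# Sketch: incrementally build a relevance graph (RG) from dialogue
# branches. Opponent arguments on the same branch, at most n levels
# apart, get an RG edge; the edge weight counts co-occurrences
# accumulated over many dialogues.
from collections import defaultdict
from itertools import combinations

def update_relevance_graph(rg, branch, n=1):
    """rg maps frozenset({a, b}) -> co-occurrence count."""
    for (i, a), (j, b) in combinations(enumerate(branch), 2):
        if j - i <= n:  # same branch, within n levels
            rg[frozenset((a, b))] += 1
    return rg

rg = defaultdict(int)
update_relevance_graph(rg, ["Y", "Z", "J"], n=1)  # one dialogue's branch
update_relevance_graph(rg, ["Y", "Z"], n=1)       # a later dialogue
print(rg[frozenset(("Y", "Z"))])  # 2: Y and Z were adjacent twice
```

Raising n here plays the same role as on the slide: with n = 2 the first branch would also connect Y and J, strengthening the induced RG's connectivity.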
  19. Proposed methodology: The

    augmentation [Figure: RG over arguments B, F, I, D] Opponent Model = {B, I} Possible augmentations: • O1 = {B, I} • O2 = {B, I, D} • O3 = {B, I, F} • O4 = {B, I, D, F} Basic Probability Laws • P(O1) + P(O2) + P(O3) + P(O4) = 1
  20. Complexity • The

    complexity is exponential!!! • 2^k augmentations, for k RG arguments not already in the opponent model • For example, for the RG on the right it would be 2^2 = 4 [Figure: RG over B, F, I, D with opponent model {B, I}]
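The exponential blow-up is easy to make concrete. A minimal sketch (names are assumptions): enumerate every augmentation of the opponent model by taking the powerset of the RG arguments it does not yet contain, which yields 2^k sets for k candidates.

```python
# Sketch: enumerate all 2**k possible augmentations of an opponent
# model (OM), one per subset of the RG arguments not yet in the OM.
from itertools import combinations

def augmentations(om, rg_nodes):
    candidates = sorted(set(rg_nodes) - set(om))
    result = []
    for r in range(len(candidates) + 1):
        for extra in combinations(candidates, r):
            result.append(set(om) | set(extra))
    return result

# The slide's example: RG over {B, F, I, D}, OM = {B, I},
# so k = 2 candidates (D and F) and 2**2 = 4 augmentations.
augs = augmentations({"B", "I"}, {"B", "F", "I", "D"})
print(len(augs))  # 4
```

For a realistic RG with dozens of candidate arguments, this enumeration is infeasible, which is what motivates the Monte-Carlo approach on the next slides.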
  21. Monte-Carlo Simulation •

    Running simulations many times over in order to calculate those same probabilities heuristically • Just like actually playing and recording your results in a real casino situation. • Hence the name!
  22. Monte-Carlo Simulation •

    Assume you want to experimentally compute this probability. • What would you do? • Throw the die an adequate number of times. • Record the results. • Compute the experimental probability.
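The die experiment on this slide can be sketched directly (the function name and the choice of estimating P(roll = 6) are illustrative assumptions):

```python
# Sketch of the slide's die experiment: estimate the probability of
# rolling a six by repeated trials, then compare with the exact 1/6.
import random

def estimate_six(trials, seed=0):
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    hits = sum(1 for _ in range(trials) if rng.randint(1, 6) == 6)
    return hits / trials

est = estimate_six(100_000)
print(abs(est - 1 / 6))  # the experimental error; shrinks with more trials
```

Re-running with larger `trials` shows the error shrinking, which is exactly the "adequate number of times" question the next slide raises.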
  23. Monte-Carlo Simulation •

    Evaluate your approach: • What is the error between the experimental and the actual probability? • Is that acceptable? • What is an adequate number of times for repeating the experiment?
  24. Monte-Carlo Simulation •

    We did just that: • Developed an algorithm that randomly traverses the relationship graph • Start point: yellow nodes (nodes already in the opponent model) • End point: nodes that are one-hop neighbours • Recorded the results and calculated the experimental probabilities [Figure: RG over B, F, I, D with the opponent-model nodes highlighted]
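The traversal described above can be sketched as follows. This is a simplified stand-in for the authors' algorithm, with assumed names and an assumed weighted-RG representation: from a random opponent-model node, take one weighted step to a one-hop neighbour, record the hit, and turn hit counts into experimental probabilities.

```python
# Sketch of the Monte-Carlo step: random one-hop walks over the RG
# from nodes already in the opponent model; hit frequencies
# approximate the augmentation likelihoods.
import random
from collections import Counter

def estimate_likelihoods(rg, om, samples=10_000, seed=0):
    """rg: node -> list of (neighbour, weight); om: model nodes."""
    rng = random.Random(seed)
    hits = Counter()
    om = list(om)
    for _ in range(samples):
        start = rng.choice(om)                      # a yellow node
        nbrs = [nb for nb, _ in rg[start]]          # one-hop neighbours
        wts = [w for _, w in rg[start]]             # relationship factors
        hits[rng.choices(nbrs, weights=wts)[0]] += 1
    return {node: count / samples for node, count in hits.items()}

# Hypothetical weights on the slide's RG: OM = {B, I}, candidates D, F.
rg = {"B": [("D", 3), ("F", 1)], "I": [("D", 2), ("F", 2)]}
probs = estimate_likelihoods(rg, ["B", "I"])
print(probs)  # D is reached more often than F under these weights
```

The sample count plays the role of the "number of samples" axis in the experimental results: more samples, smaller error, at linear rather than exponential cost.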
  25. Experimental Results: Error

    per argument likelihood over n samples • We did pretty well! • In just 100 samples the error dropped below 0.1.
  26. Publication (2013): Christos

    Hadjinikolis, Yiannis Siantos, Sanjay Modgil, Elizabeth Black, Peter McBurney. Opponent Modelling in Persuasion Dialogues. In: F. Rossi (Editor): Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), August 2013, Beijing, China. Best poster award!
  27. An example: Possible

    augmentation [Figure: RG over arguments B, F, I, D; opponent model {B, I}] 1. P({B, I, D}) = P(D) − P(D ∩ F) 2. P(D ∪ F) = P(D) + P(F) − P(D ∩ F) 3. P(D ∩ F) = P(D) ∗ P(F) ⇒ P({B, I, D}) = (P(D) + P(F) − P(D) ∗ P(F)) − P(F)