
Rethinking the Intelligent Agent Perceive-Reason-Act Loop

Slides from October 2002 describing the early, preliminary formulation of the ideas that formed a substantial part of my PhD research.

The slides describe the need to rethink the agent reasoning loop, show how feedback loops might be incorporated into the process by making environmental perception dependent on mental state, and introduce the concepts of direct perception and affordances.

Michael Papasimeon

October 30, 2002

Transcript

  1. Rethinking the Intelligent Agent Perceive-Reason-Act Loop

     Michael Papasimeon, Intelligent Agent Lab
     30 October 2002
  2. Agent-Environment Interaction

     Key issues with current approaches to agent-environment interaction:
     - They treat the agent and the environment as separate entities.
     - Communication happens via inputs and outputs.
     - Agent-environment designs do not follow through on the claims about
       agents being situated and the environment being important.
  3. Agent Control Loop...

     A Pythonic version of Wooldridge's agent control loop:

         while True:
             observe_the_world()
             update_internal_world_model()
             deliberate_about_which_intention_to_achieve()
             use_means_end_reasoning_to_find_a_plan()
             execute_the_plan()
  4. Or the BDI Control Loop...

     Adapted from Wooldridge...

         procedure BDI(B0, I0)
             B ← B0
             I ← I0
             while True do
                 ρ ← get_next_percept();
                 B ← brf(B, ρ);
                 D ← options(B, I);
                 I ← filter(B, D, I);
                 π ← plan(B, I);
                 execute(π);
             end while
         end procedure
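
     A minimal, runnable Python sketch of this loop, with every helper
     stubbed out. The stubs are hypothetical placeholders, not part of
     Wooldridge's formulation or any real library, and filter is renamed
     to avoid shadowing the Python builtin:

         # Minimal sketch of the BDI loop above; it runs forever, as the
         # pseudocode does. All helpers are hypothetical stubs.

         def get_next_percept():          # stub: observe the environment
             return None

         def brf(B, rho):                 # stub: belief revision function
             return B

         def options(B, I):               # stub: generate desires from B, I
             return set()

         def filter_intentions(B, D, I):  # stub: commit to intentions
             return I

         def plan(B, I):                  # stub: means-end reasoning
             return []

         def execute(pi):                 # stub: act on the environment
             pass

         def bdi_loop(B0, I0):
             B, I = B0, I0
             while True:
                 rho = get_next_percept()
                 B = brf(B, rho)
                 D = options(B, I)
                 I = filter_intentions(B, D, I)
                 pi = plan(B, I)
                 execute(pi)
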
  5. Let's dig deeper...

     - Begin to look at the agent control loop and the interaction with the
       environment in more detail.
     - The interaction between agent and environment needs to be broken down
       into components, step by step.
     - Start looking at how inputs/outputs are generated... i.e. look at
       sensors and actuators.
  6. Labels in the Environment

     One of the things that can be sent to an agent's sensors is pre-labelled
     entities in the environment.
  7. We can begin to formulate a theory...

     - In a multi-agent system we have n agents, A_1 ... A_n.
     - Each agent has m sensors.
     - We can specify the i-th agent's j-th sensor as S_ij.
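
     Read as a data structure, this indexing might look like the following
     purely illustrative sketch, where sensors[i][j] stands for S_ij:

         # Purely illustrative: n agents, each with m sensors, so that
         # sensors[i][j] plays the role of S_ij (agent A_i's j-th sensor).
         n_agents, n_sensors = 3, 2
         sensors = [[f"S_{i}{j}" for j in range(n_sensors)]
                    for i in range(n_agents)]
         print(sensors[1][0])  # agent A_1's first sensor, "S_10"
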
  8. Agent Mental States

     - Each agent A_i can be in a single mental state m_i.
     - The mental state may be the agent's beliefs and intentions:
       m_i = {B_i, I_i}
     - Consider the sensing of the environment to be a function of the
       agent's current mental state (see the sketch below).
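
     As a data structure, m_i = {B_i, I_i} might be sketched like this; the
     class and field names are invented for illustration:

         from dataclasses import dataclass, field

         # Hypothetical sketch: a mental state m_i as beliefs B_i plus
         # intentions I_i.
         @dataclass
         class MentalState:
             beliefs: set = field(default_factory=set)     # B_i
             intentions: set = field(default_factory=set)  # I_i

         m = MentalState(beliefs={"creek_ahead"},
                         intentions={"reach_town_B"})
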
  9. Agent Mental State in the Loop...

     [Diagram: the agent-environment loop. Raw sensor inputs flow from the
     environment to the agent's sensors, which produce perceptions feeding
     the agent's mental state; the mental state issues commands to the
     agent's actuators, which perform actions on the environment.]
  10. Perception and Mental State

      - Implies perception/sensing is a function of an agent's mental state.
      - What you perceive as an agent depends on what you are doing and what
        you believe you are doing (beliefs, intentions).
      - This fits in with J.J. Gibson's ideas of direct perception in
        ecological psychology.
      - Sensor(σ_i, e, m_i) := σ_{i+1}  (sketched below)
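
      A hypothetical sketch of Sensor(σ_i, e, m_i) := σ_{i+1}, reusing the
      MentalState sketch above; the relevance rule is invented purely for
      illustration:

          # Hypothetical sketch: sensing as a function of the current
          # sensor state, the environment, and the agent's mental state.
          def sensor(sigma, environment, mental_state):
              # Invented relevance rule: an entity is perceived only if
              # some current belief or intention mentions it.
              mentioned = mental_state.beliefs | mental_state.intentions
              relevant = {e for e in environment
                          if any(e in item for item in mentioned)}
              return sigma | relevant  # the next sensor state, sigma_{i+1}

          sensor(set(), {"creek", "rock"}, m)  # -> {"creek"}
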
  11. Environmental Representation

      Still need to look at environmental representation options:
      - Flat
      - Hierarchical/Relational
      - Labels (dynamic or pre-processed)
      - Intention oriented
      - Affordances
      - Dynamic, tailored to the agent
      - Static environment
  12. So what is the goal then?

      To create a truly situated agent. Affordances (opportunities for
      action), together with a tighter agent-environment feedback loop,
      might just do the trick.
  13. Affordances

      Affordances are a function of:
      - the subset of the environment that the agent is perceiving (or can
        perceive) using its sensors;
      - the agent's mental state;
      - the agent's current activity.
      Do we need to distinguish between intention, activity and action?
      (A sketch of such a function follows below.)
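
      Taken literally, the slide says affordances = f(perceived subset,
      mental state, activity). A hypothetical sketch with an invented rule
      table; the toy rules ignore the mental state, though the signature
      keeps all three arguments:

          # Invented rule table, purely for illustration.
          RULES = {
              ("creek", "running"): "can-jump",
              ("creek", "walking"): "can-wade",
          }

          # Hypothetical sketch: affordances as a function of the perceived
          # subset of the environment, the mental state, and the activity.
          def affordances(perceived, mental_state, activity):
              return {(entity, RULES[(entity, activity)])
                      for entity in perceived
                      if (entity, activity) in RULES}
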
  14. Example: Jumping a Creek

      - My intention is to get to Town B from Town A.
      - I have a plan to run from A to B.
      - I have a plan to walk from A to B.
      - I see a creek.
      - If I am running, the creek affords jumping.
      Here the affordance is a function of the activity rather than the
      intention.
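
      Continuing the hypothetical sketches above, the same percept then
      yields different affordances depending on the activity:

          # Same perceived entity, different activity, different affordance.
          perceived = {"creek"}
          affordances(perceived, m, "running")  # -> {("creek", "can-jump")}
          affordances(perceived, m, "walking")  # -> {("creek", "can-wade")}
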
  15. How do we build such an agent?

      - The agent announces to the environment what it can see.
      - The agent announces to the environment what it is doing (activity or
        action), or maybe even its intention.
      - The environment/affordance engine somehow binds what the agent can
        see with what it is doing, generating affordances for the things in
        the environment (one hypothetical shape for this exchange is
        sketched below).
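
      One hypothetical shape for that announce-and-bind exchange, reusing
      the invented RULES table above; none of these names come from a real
      framework:

          # Hypothetical sketch of an environment-side affordance engine.
          class AffordanceEngine:
              def __init__(self, rules):
                  self.rules = rules  # e.g. the invented RULES table above

              def bind(self, visible, activity):
                  # Bind what the agent can see to what it is doing,
                  # returning affordances for the visible entities.
                  return {(e, self.rules[(e, activity)])
                          for e in visible
                          if (e, activity) in self.rules}

          engine = AffordanceEngine(RULES)
          engine.bind({"creek"}, "running")  # -> {("creek", "can-jump")}
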
  16. Issues (1)

      - How does the agent sense/perceive the affordances?
      - Is there an affordance sensor?
      - Does the agent get affordance percepts (direct percepts) in addition
        to regular percepts?
      - How does the agent then use these affordances in the next
        deliberation step?
  17. Issues (2)

      - What do affordances look like? Names, labels, relations?
      - can-jump(creek) → what are these?
      - How does having these affordances affect your intention generation
        process?
      - Need more examples...
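
      One candidate answer, offered purely as a sketch: treat an affordance
      such as can-jump(creek) as a named relation over perceived entities:

          from typing import NamedTuple

          # Hypothetical sketch: an affordance as a named relation, so
          # that can-jump(creek) becomes Affordance("can-jump", "creek").
          class Affordance(NamedTuple):
              relation: str  # e.g. "can-jump"
              entity: str    # e.g. "creek"

          Affordance("can-jump", "creek")
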
  18. Example

      1. Sensors observe the environment.
      2. The environment returns labelled entities that can be sensed.
      3. The agent informs the environment about what it can see and its
         mental state.
      4. The environment returns affordances relating to what the agent can
         see.
      5. The affordances are used by the agent to choose the next
         intention/action.
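
      Tying the hypothetical sketches above together, one pass through this
      sequence might read as follows; every name here comes from the earlier
      sketches, not from a real system:

          # One hypothetical pass through steps 1-5 above.
          visible = sensor(set(), {"creek", "rock"}, m)  # steps 1-2
          engine = AffordanceEngine(RULES)
          affs = engine.bind(visible, "running")         # steps 3-4
          if ("creek", "can-jump") in affs:              # step 5
              m.intentions.add("jump_creek")
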