Slide 1

Rethinking the Intelligent Agent Perceive-Reason-Act Loop

Michael Papasimeon
Intelligent Agent Lab
30 October 2002

Slide 2

Agent-Environment Interaction

Key issues with current approaches to agent-environment interaction:
- They treat the agent and the environment as separate entities.
- Communication is via inputs and outputs.
- Agent-environment designs do not live up to claims about:
  - Agents being situated.
  - The environment being important.

Slide 3

Agent-Environment Interaction Loop

[Figure: the agent-environment interaction loop. The agent sends action output to the environment, and the environment sends sensor input back to the agent.]

Slide 4

Agent Control Loop...

Pythonic version of Wooldridge's agent control loop:

    while True:
        observe_the_world()
        update_internal_world_model()
        deliberate_about_which_intention_to_achieve()
        use_means_end_reasoning_to_find_a_plan()
        execute_the_plan()

Slide 5

Or the BDI Control Loop...

Adapted from Wooldridge...

    procedure BDI(B0, I0)
        B ← B0
        I ← I0
        while True do
            ρ ← get next percept();
            B ← brf(B, ρ);
            D ← options(B, I);
            I ← filter(B, D, I);
            π ← plan(B, I);
            execute(π);
        end while
    end procedure
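
For concreteness, a minimal Python sketch of the same loop. This is only an illustration under assumed names: brf, options, filter, plan and execute are stubs standing in for real belief revision, option generation, intention filtering, planning and execution components.

    # Minimal, illustrative BDI agent; the reasoning components are stubs.
    class BDIAgent:
        def __init__(self, beliefs, intentions):
            self.beliefs = set(beliefs)        # B
            self.intentions = set(intentions)  # I

        def step(self, percept):
            """One pass through the BDI control loop for a single percept rho."""
            self.beliefs = self.brf(self.beliefs, percept)            # B <- brf(B, rho)
            desires = self.options(self.beliefs, self.intentions)     # D <- options(B, I)
            self.intentions = self.filter(self.beliefs, desires, self.intentions)
            plan = self.plan(self.beliefs, self.intentions)           # pi <- plan(B, I)
            self.execute(plan)

        # Stub reasoning components -- placeholders only.
        def brf(self, beliefs, percept):
            return beliefs | {percept}

        def options(self, beliefs, intentions):
            return set(intentions)

        def filter(self, beliefs, desires, intentions):
            return desires

        def plan(self, beliefs, intentions):
            return sorted(intentions)

        def execute(self, plan):
            for action in plan:
                print("executing:", action)

For example, BDIAgent(beliefs={"at_town_A"}, intentions={"get_to_town_B"}).step("see_creek") runs one perceive-deliberate-act cycle.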

Slide 6

Let's dig deeper...

- Begin to look at the agent control loop and the interaction with the environment in more detail.
- The interaction between agent and environment needs to be broken down into components, step by step.
- Start looking at how inputs/outputs are generated, i.e. look at sensors and actuators.

Slide 7

A level down...

Slide 8

Labels in the Environment

One of the things that can be sent to an agent's sensors is a set of pre-labelled entities in the environment.
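
As a rough illustration only (the record and its fields are assumptions, not part of the slides), a pre-labelled entity handed to a sensor might look like this:

    from dataclasses import dataclass

    # Hypothetical pre-labelled entity that the environment exposes to sensors.
    @dataclass
    class LabelledEntity:
        name: str        # e.g. "creek-1"
        label: str       # e.g. "creek"
        position: tuple  # (x, y) location in the environment

    entities = [LabelledEntity("creek-1", "creek", (120.0, 45.0)),
                LabelledEntity("town-B", "town", (300.0, 10.0))]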

Slide 9

We can begin to formulate a theory...

- In a multi-agent system we have n agents, A_1, ..., A_n.
- Each agent has m sensors.
- We can specify the i-th agent's j-th sensor as S_ij.
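
Purely to illustrate the indexing (the classes are assumptions, not from the slides), the i-th agent's j-th sensor S_ij could be addressed like this:

    from dataclasses import dataclass, field

    @dataclass
    class Sensor:
        name: str

    @dataclass
    class Agent:
        name: str
        sensors: list = field(default_factory=list)

    # n = 3 agents, each with m = 2 sensors.
    agents = [Agent(name=f"A{i+1}",
                    sensors=[Sensor(name=f"S{i+1}{j+1}") for j in range(2)])
              for i in range(3)]

    s_32 = agents[2].sensors[1]   # the 3rd agent's 2nd sensor, S_32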

Slide 10

Agent Mental States

Each agent A_i can be in a single mental state m_i.
The mental state may be the agent's beliefs and intentions:

    m_i = {B_i, I_i}

Consider the sensing of the environment to be a function of the agent's current mental state.
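
As a minimal sketch (the names are assumptions), the mental state m_i = {B_i, I_i} could be bundled as a simple record:

    from dataclasses import dataclass, field

    # Hypothetical container for an agent's mental state m_i = {B_i, I_i}.
    @dataclass
    class MentalState:
        beliefs: set = field(default_factory=set)      # B_i
        intentions: set = field(default_factory=set)   # I_i

    m = MentalState(beliefs={"at_town_A"}, intentions={"get_to_town_B"})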

Slide 11

Agent Mental State in the Loop...

[Figure: the agent-environment loop with the agent's mental state at its centre. The environment delivers raw sensor inputs to the agent's sensors, which pass perceptions to the agent's mental state; the mental state issues commands to the agent's actuators, which perform actions on the environment.]

Slide 12

Perception and Mental State

This implies that perception/sensing is a function of an agent's mental state.
What you perceive as an agent depends on what you are doing and what you believe you are doing (beliefs, intentions).
This fits in with J.J. Gibson's ideas of direct perception from ecological psychology.

    Sensor(σ_i, e, m_i) := σ_{i+1}
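
A small sketch of a sensor in this style, where the percepts produced depend on the agent's mental state as well as the environment; the filtering rule is invented purely for illustration:

    # Illustrative mental-state-dependent sensor: Sensor(sigma_i, e, m_i) -> sigma_{i+1}.
    def sense(current_percepts, environment, mental_state):
        """Return the next set of percepts given the current mental state."""
        relevant = set()
        for entity in environment:
            # Invented rule: an entity is perceived if its label is mentioned
            # in any of the agent's current intentions.
            if any(entity in intention for intention in mental_state["intentions"]):
                relevant.add(entity)
        return current_percepts | relevant

    percepts = sense(set(),
                     {"creek", "town_B", "forest"},
                     {"beliefs": {"at_town_A"},
                      "intentions": {"jump_creek", "reach_town_B"}})
    # -> {"creek", "town_B"}: only entities mentioned in an intention are perceived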

Slide 13

The Agent-Environment Loop Revisited

Slide 14

Intention Based Feedback Loop

Slide 15

Environmental Representation

Still need to look at environmental representation options:
- Flat
- Hierarchical/Relational
- Labels (Dynamic or Pre-Processed)
- Intention Oriented
- Affordances
  - Dynamic agent (tailored to the agent)
  - Static environment
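
As a rough sketch of two of these options (the structures are assumptions, not taken from the slides), a flat representation is a single undifferentiated collection of entities, while a hierarchical/relational one groups and relates them:

    # Flat representation: one undifferentiated collection of labelled entities.
    flat_world = ["creek-1", "town-A", "town-B", "road-1"]

    # Hierarchical/relational representation: entities grouped and related.
    hierarchical_world = {
        "terrain": {
            "creek-1": {"crosses": ["road-1"]},
        },
        "settlements": {
            "town-A": {"connected_to": ["road-1"]},
            "town-B": {"connected_to": ["road-1"]},
        },
        "infrastructure": {
            "road-1": {"from": "town-A", "to": "town-B"},
        },
    }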

Slide 16

So what is the goal then?

To create a truly situated agent.
Affordances (opportunities for action), together with a tighter agent-environment feedback loop, might just do the trick.

Slide 17

Affordances

Affordances are a function of:
- The subset of the environment that the agent perceives or can perceive using its sensors.
- The agent's mental state.
- The agent's current activity...
Do we need to distinguish between intention/activity/action?
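
A minimal sketch of an affordance function with this signature; the names and the rule table are assumptions made for illustration, not part of the slides:

    # Illustrative affordance function: affordances depend on what the agent
    # perceives, its mental state, and its current activity.
    AFFORDANCE_RULES = {
        # (entity label, activity) -> affordance name
        ("creek", "running"): "can-jump",
        ("road", "walking"): "can-follow",
    }

    def affordances(perceived_entities, mental_state, activity):
        """Return (affordance, entity) pairs for the agent's current perception."""
        # mental_state is accepted but not used by this toy rule table.
        found = set()
        for entity in perceived_entities:
            affordance = AFFORDANCE_RULES.get((entity, activity))
            if affordance is not None:
                found.add((affordance, entity))
        return found

    print(affordances({"creek", "road"},
                      {"intentions": {"get_to_town_B"}},
                      activity="running"))
    # -> {("can-jump", "creek")}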

Slide 18

Example: Jumping a Creek

- My intention is to get to Town B from Town A.
- I have a plan to run from A to B.
- I have a plan to walk from A to B.
- I see a creek.
- If I am running, the creek affords jumping.
Here the affordance is a function of the activity rather than the intention.
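
The same example, sketched in code. The rule that running (and not walking) makes the creek jumpable follows the slide; the function name is invented:

    # The affordance depends on the current activity, not on the intention:
    # the intention (get to Town B) is the same in both cases.
    def creek_affordances(activity):
        if activity == "running":
            return {("can-jump", "creek")}
        return set()

    intention = "get_to_town_B"           # unchanged in both cases
    print(creek_affordances("running"))   # {("can-jump", "creek")}
    print(creek_affordances("walking"))   # set() -- no jumping affordance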

Slide 19

How do we build such an agent?

- The agent announces to the environment what it can see.
- The agent announces to the environment what it is doing (activity or action), or maybe even its intention.
- The environment/affordance engine somehow binds what I can see with what I am doing, generating affordances for the things in the environment (sketched below).
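
A rough sketch of that announce-and-bind exchange; the AffordanceEngine class and its bind method are hypothetical names introduced only to illustrate the protocol:

    # Hypothetical affordance engine on the environment side: the agent
    # reports what it can see and what it is doing, and the engine binds
    # the two to produce affordances.
    class AffordanceEngine:
        def __init__(self, rules):
            self.rules = rules   # (entity label, activity) -> affordance name

        def bind(self, visible_entities, activity):
            return {(self.rules[(entity, activity)], entity)
                    for entity in visible_entities
                    if (entity, activity) in self.rules}

    engine = AffordanceEngine({("creek", "running"): "can-jump"})

    visible = {"creek", "road"}      # what the agent announces it can see
    activity = "running"             # what the agent announces it is doing

    print(engine.bind(visible, activity))   # {("can-jump", "creek")}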

Slide 20

Issues (1)

- How does the agent sense/perceive the affordances?
- Is there an affordance sensor?
- Does the agent get affordance percepts (direct percepts) in addition to regular percepts?
- How does the agent then use these affordances in the next deliberation step?

Slide 21

Issues (2)

- What do affordances look like? Names, labels, relations?
- can-jump(creek) → what are these?
- How does having these affordances affect your intention generation process?
- Need more examples...
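
One possible shape for an affordance such as can-jump(creek) is a small relation record; this representation is an assumption, not something proposed on the slide:

    from dataclasses import dataclass

    # Hypothetical representation of an affordance as a named relation,
    # e.g. can-jump(creek).
    @dataclass(frozen=True)
    class Affordance:
        relation: str   # e.g. "can-jump"
        entity: str     # e.g. "creek"

    jumpable = Affordance(relation="can-jump", entity="creek")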

Slide 22

Example

1. Sensors observe the environment.
2. The environment returns labelled entities that can be sensed.
3. The agent informs the environment about what it can see and its mental state.
4. The environment returns affordances relating to what the agent can see.
5. The affordances are used by the agent to choose its next intention.
6. Action.
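
Finally, a sketch that strings those steps together into a single pass of the loop; every class, method and rule here is a placeholder invented to illustrate the sequence, not a proposed implementation:

    # One illustrative pass through the example loop above.
    class SimpleEnvironment:
        def sensable_entities(self):
            return {"creek", "road"}                         # labelled entities

        def affordances_for(self, visible, mental_state):
            # Assumed rule: the creek affords jumping while the agent is running.
            if "creek" in visible and mental_state.get("activity") == "running":
                return {("can-jump", "creek")}
            return set()

    class SimpleAgent:
        def __init__(self):
            self.mental_state = {"intentions": {"get_to_town_B"},
                                 "activity": "running"}

        def perceive(self, entities):
            return entities                                  # sees everything, for simplicity

        def choose_intention(self, affordances):
            return "jump_creek" if ("can-jump", "creek") in affordances else "keep_moving"

        def act(self, intention):
            print("acting on intention:", intention)

    env, agent = SimpleEnvironment(), SimpleAgent()
    entities = env.sensable_entities()                        # 1-2. sense labelled entities
    visible = agent.perceive(entities)                        # 3. agent reports what it can see
    affs = env.affordances_for(visible, agent.mental_state)   # 4. environment returns affordances
    agent.act(agent.choose_intention(affs))                   # 5-6. choose next intention and act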