Slide 1

Adversarial Search
Vimal Atreya Ramaka, Maher Alhadlag, Kirubanidhy Jambugeswaran

Slide 2

Adversarial Search
•  Competitive multi-agent environments
•  Agents’ goals are in conflict
•  This gives rise to adversarial search problems
•  Two agents act alternately
•  For example, if one player wins a game of chess, the other player necessarily loses

Slide 3

Games
•  Too hard to solve exactly.
•  For example, chess has an average branching factor of about 35, and games often go to 50 moves by each player (100 moves in total for the two players), so the search tree has about 35^100 nodes.
•  Games, like real-world problems, therefore require the ability to make some decision even when calculating the optimal decision is infeasible.

Slide 4

Relation to Search
•  Search – no adversary
   o  Solution is a (heuristic) method for finding a goal
   o  Heuristics can help find the optimal solution
   o  Evaluation function: estimate of cost from start to goal through a given node
   o  Examples: path planning, scheduling activities
•  Games – adversary
   o  Solution is a strategy (a strategy specifies a move for every possible opponent reply)
   o  Time limits force an approximate solution
   o  Evaluation function: evaluates the “goodness” of a game position
   o  Examples: chess, checkers, tic-tac-toe, backgammon

Slide 5

Game setup
•  We consider games with two players: MAX and MIN
•  MAX moves first, then they take turns until the game is over
•  At the end of the game, the winner gets an award and the loser gets a penalty

Slide 6

Defining a game as a search problem
•  S0: The initial state, which specifies how the game is set up at the start.
•  PLAYER(s): Defines which player has the move in a state.
•  ACTIONS(s): Returns the set of legal moves in a state.
•  RESULT(s,a): The transition model, which defines the result of a move.
•  TERMINAL-TEST(s): A terminal test, which is true when the game is over and false otherwise.
   o  States where the game has ended are called terminal states.
•  UTILITY(s,p): A utility function (also called an objective function or payoff function), which defines the final numeric value for a game that ends in terminal state s for a player p.
   o  In chess, the outcome is a win, loss, or draw, with values +1, 0, or 1/2.
   o  Some games have a wider variety of possible outcomes.
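As an illustration only (not from the slides), this formulation could be written as a minimal Python interface; the class name Game and the method names simply mirror the functions above and are assumptions of this sketch, not an existing API.

# Illustrative sketch: an abstract interface mirroring the game formulation
# (S0, PLAYER, ACTIONS, RESULT, TERMINAL-TEST, UTILITY).

class Game:
    """Abstract two-player, turn-taking game."""

    def initial_state(self):            # S0: how the game is set up at the start
        raise NotImplementedError

    def player(self, state):            # PLAYER(s): which player has the move
        raise NotImplementedError

    def actions(self, state):           # ACTIONS(s): the legal moves in a state
        raise NotImplementedError

    def result(self, state, action):    # RESULT(s, a): the transition model
        raise NotImplementedError

    def terminal_test(self, state):     # TERMINAL-TEST(s): true when the game is over
        raise NotImplementedError

    def utility(self, state, player):   # UTILITY(s, p): value of terminal state s for player p
        raise NotImplementedError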

Slide 7

Game tree
•  The initial state, ACTIONS, and RESULT functions define the game tree, where the nodes are game states and the edges are moves.
•  Example game: tic-tac-toe (X vs. O).
   o  From the initial state, MAX has 9 possible moves.
   o  Play alternates between MAX placing an X and MIN placing an O.
   o  Eventually we reach leaf nodes corresponding to terminal states, where one player has three in a row (vertical, horizontal, or diagonal) or all the squares are filled.
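As a concrete illustration (an assumption of this write-up, not part of the slides), these game-tree primitives for tic-tac-toe might be sketched as plain Python functions, with a state represented as (board, player-to-move), MAX playing X and MIN playing O.

# Hypothetical sketch of the tic-tac-toe primitives; the state encoding is
# an assumption made here for illustration.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def initial_state():
    return (None,) * 9, 'X'                  # empty board; MAX (X) moves first

def actions(state):
    board, _ = state
    return [i for i in range(9) if board[i] is None]    # 9 moves at the root

def result(state, move):
    board, to_move = state
    new_board = board[:move] + (to_move,) + board[move + 1:]
    return new_board, ('O' if to_move == 'X' else 'X')  # play alternates

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]                  # three in a row, column, or diagonal
    return None

def terminal_test(state):
    board, _ = state
    return winner(board) is not None or all(sq is not None for sq in board)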

Slide 8

Partial Game Tree for Tic-Tac-Toe

Slide 9

Search tree
•  For tic-tac-toe the game tree is relatively small: fewer than 9! = 362,880 terminal nodes.
•  But for chess there are over 10^40 nodes, so the game tree is best thought of as a theoretical construct that we cannot realize in the physical world.
•  Regardless of the size of the game tree, it is MAX's job to search for a good move.
•  We use the term search tree for a tree that is superimposed on the full game tree and examines enough nodes to allow a player to determine what move to make.

Slide 10

Optimal strategies
•  In games, MAX needs to find a contingent strategy, assuming an infallible MIN opponent.
•  Namely, we assume that both players play optimally!
•  Given a game tree, the optimal strategy for MAX can be determined from the minimax value of each node:

MINIMAX(s) =
   UTILITY(s)                                           if TERMINAL-TEST(s)
   max over a ∈ ACTIONS(s) of MINIMAX(RESULT(s, a))     if PLAYER(s) = MAX
   min over a ∈ ACTIONS(s) of MINIMAX(RESULT(s, a))     if PLAYER(s) = MIN
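A minimal Python sketch of this recursion is shown below, assuming a game object with the interface sketched on the "Defining a game as a search problem" slide and utilities reported from MAX's point of view; the player identifier 'MAX' is an assumption of the sketch.

def minimax_value(game, state):
    """Minimax value of a state, from MAX's point of view."""
    if game.terminal_test(state):
        return game.utility(state, 'MAX')               # UTILITY(s)
    values = [minimax_value(game, game.result(state, a))
              for a in game.actions(state)]
    if game.player(state) == 'MAX':
        return max(values)                              # MAX picks the largest value
    return min(values)                                  # MIN picks the smallest value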

Slide 11

Two-Ply Game Tree
[Figure: a two-ply game tree with utility values at the leaf nodes]
One level of actions in a game tree is called a ply in AI.
Minimax maximizes the worst-case outcome for MAX.

Slide 12

Minimax Algorithm

function MINIMAX-DECISION(state) returns an action
   inputs: state, current state in game
   v ← MAX-VALUE(state)
   return the action in ACTIONS(state) with value v

function MAX-VALUE(state) returns a utility value
   if TERMINAL-TEST(state) then return UTILITY(state)
   v ← −∞
   for each a in ACTIONS(state) do
      v ← MAX(v, MIN-VALUE(RESULT(state, a)))
   return v

function MIN-VALUE(state) returns a utility value
   if TERMINAL-TEST(state) then return UTILITY(state)
   v ← +∞
   for each a in ACTIONS(state) do
      v ← MIN(v, MAX-VALUE(RESULT(state, a)))
   return v
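For reference, a hedged Python rendering of this pseudocode might look as follows; it again assumes the hypothetical game interface sketched earlier and is a sketch, not a production implementation.

import math

def minimax_decision(game, state):
    """Return the action for MAX with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game, game.result(state, a)))

def max_value(game, state):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = -math.inf
    for a in game.actions(state):
        v = max(v, min_value(game, game.result(state, a)))
    return v

def min_value(game, state):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value(game, game.result(state, a)))
    return v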

Slide 13

Properties of Minimax
•  The minimax algorithm performs a complete depth-first traversal of the game tree.
•  Time: O(b^m)
•  Space: O(bm)
   (b: branching factor, m: maximum depth of the tree)
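To get a feel for the O(b^m) growth, a small illustrative calculation (not from the slides) counts the nodes of a uniform tree with branching factor b and depth m:

def tree_nodes(b, m):
    """Total nodes in a uniform tree with branching factor b and depth m."""
    return sum(b ** d for d in range(m + 1))

# Even a shallow chess-like tree (b ≈ 35) explodes quickly:
for depth in range(1, 6):
    print(f"b=35, m={depth}: {tree_nodes(35, depth):,} nodes")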

Slide 14

Problem of minimax search
•  The number of nodes that minimax search has to examine is exponential in the depth of the tree
•  Solution: do not examine every node
•  Alpha-beta pruning:
   o  Alpha = value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX
   o  Beta = value of the best (i.e., lowest-value) choice found so far at any choice point along the path for MIN

Slide 15

Alpha-Beta Algorithm

function ALPHA-BETA-SEARCH(state) returns an action
   inputs: state, current state in game
   v ← MAX-VALUE(state, −∞, +∞)
   return the action in ACTIONS(state) with value v

function MAX-VALUE(state, α, β) returns a utility value
   if TERMINAL-TEST(state) then return UTILITY(state)
   v ← −∞
   for each a in ACTIONS(state) do
      v ← MAX(v, MIN-VALUE(RESULT(state, a), α, β))
      if v >= β then return v
      α ← MAX(α, v)
   return v

function MIN-VALUE(state, α, β) returns a utility value
   if TERMINAL-TEST(state) then return UTILITY(state)
   v ← +∞
   for each a in ACTIONS(state) do
      v ← MIN(v, MAX-VALUE(RESULT(state, a), α, β))
      if v <= α then return v
      β ← MIN(β, v)
   return v
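A hedged Python rendering of this pseudocode is sketched below, again assuming the hypothetical game interface used earlier; only the (alpha, beta) bookkeeping and the two cutoff tests differ from plain minimax.

import math

def alpha_beta_search(game, state):
    """Return MAX's best action, pruning branches that cannot affect the choice."""
    best_action, best_value = None, -math.inf
    for a in game.actions(state):
        value = ab_min_value(game, game.result(state, a), best_value, math.inf)
        if value > best_value:
            best_action, best_value = a, value
    return best_action

def ab_max_value(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = -math.inf
    for a in game.actions(state):
        v = max(v, ab_min_value(game, game.result(state, a), alpha, beta))
        if v >= beta:               # MIN already has a better option elsewhere: prune
            return v
        alpha = max(alpha, v)
    return v

def ab_min_value(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = math.inf
    for a in game.actions(state):
        v = min(v, ab_max_value(game, game.result(state, a), alpha, beta))
        if v <= alpha:              # MAX already has a better option elsewhere: prune
            return v
        beta = min(beta, v)
    return v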

Slide 16

DEMO

Slide 17

Thank You