How to think like a system: Puzzles, Problems, and Programs

Chris
September 28, 2018


Strange Loop 2018 talk.


Transcript

  1. How to think like a system: Puzzles, Problems, Programs Chris

    Martens North Carolina State University @ Strange Loop 2018 1
  2. Bongard Problems 2

  3. Bongard Problems 3 Triangles vs. Quadrilaterals

  4. Bongard Problems 4

  5. Bongard Problems 5 Unfilled shape overlaps filled vs. vice versa

  6. Bongard Problems 6

  7. Bongard Problems 7 Ask a neighbor!

  8. Bongard Problems 8 Harry Foundalis - index and dissertation http://www.foundalis.com/res/bps/bpidx.htm

  9. 9 About Me •Assistant Professor, Computer Science @ NC State

    •Director, Principles of Expressive Machines (POEM) Lab •Strange Loop #3 •I <3 functional programming, logic, proofs, etc. •Research: tools & languages for interactive narrative, game AI, procedural generation, system modeling
  10. 10 Lately I’ve been thinking about how things like this

    relate to things like this (Experiential characteristics) (Formal descriptions)
  11. 11 human-system interaction

  12. Normal problems: apply rules to new cases Bongard problems: study

    cases to learn the rule
  13. Using Examples to Learn Rules Things humans do: • Hypothesis-driven

    science • Test-driven development Things computers do: • Example-driven program synthesis • Data-trained classifiers
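Slide 13's "example-driven program synthesis" can be made concrete with a brute-force sketch in Python (the talk fixes no language): enumerate candidate rules and keep those consistent with every labeled example. The candidate rules and examples below are invented for illustration.

```python
# Brute-force rule learning from examples: keep every candidate
# rule that agrees with all labeled examples. Rules and examples
# here are illustrative, not from the talk.

candidate_rules = {
    "even": lambda n: n % 2 == 0,
    "positive": lambda n: n > 0,
    "divisible_by_3": lambda n: n % 3 == 0,
}

# (input, label) pairs, in the spirit of Bongard problems
examples = [(2, True), (4, True), (7, False), (-6, True)]

def consistent(rule):
    return all(rule(x) == label for x, label in examples)

learned = [name for name, rule in candidate_rules.items() if consistent(rule)]
print(learned)  # -> ['even']
```

Real synthesis tools search far larger rule spaces with pruning, but the contract is the same: examples in, consistent program out.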
  14. How should we design the tools (game levels, code editors,

    interfaces, programming languages) necessary for humans and computers to do these things together?
  15. Outline •Mental model alignment theory •Case Studies •Puzzles •Problems •Programs

    •Tools for Thought 15
  16. Outline 16 •Mental model alignment theory •Case Studies •Puzzles •Problems

    •Programs •Tools for Thought
  17. Mental Models 17 Predict Explain 1 - P.N. Johnson-Laird. Mental

    Models. Cambridge University Press, Cambridge, 1983. 1
  18. Mental Model Alignment Theory 18 Press right, Mario goes right

    Press A, Mario jumps YouTube “Extra Credits,” episode “Design Club - Super Mario Bros: Level 1-1 - How Super Mario Mastered Level Design.” https://www.youtube.com/watch?v=ZH2wGpEZVgE
  19. Mental Model Alignment Theory 19 Press right, Mario goes right

    Press A, Mario jumps Collide w/this thing ???
  20. Mental Model Alignment Theory 20 Press right, Mario goes right

    Press A, Mario jumps Collide w/this thing, die
  21. Generalization 21 Press right, Mario goes right Press A, Mario

    jumps Collide w/mushroom shape, die
  22. Application 22 Press right, Mario goes right Press A, Mario

    jumps Collide w/mushroom, die?
  23. Application 23 Press right, Mario goes right Press A, Mario

    jumps Collide w/mushroom, die?
  24. Revision 24 Press right, Mario goes right Press A, Mario

    jumps Collide w/angry mushroom, die Collide w/mushroom, get big
  25. Mental Model Alignment Theory In sum: *Mental models simulate an

    external reality *They make predictions and explanations *They generalize from examples *They revise upon failure
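The four properties above can be sketched as a toy mental model in Python (my illustration of the loop, not anything from the talk): the model predicts an outcome for an observation, answers "???" for unknowns, and revises when reality disagrees.

```python
# Toy mental model in the Mario example's terms: a mapping from
# observations to predicted outcomes, revised on surprise.
# All names here are illustrative.

model = {"angry_mushroom": "die"}  # what we've learned so far

def predict(obs):
    return model.get(obs, "???")   # unknown observation: no prediction yet

def revise(obs, actual):
    model[obs] = actual            # update the model on new evidence

assert predict("mushroom") == "???"   # application: no rule yet
revise("mushroom", "get big")         # revision after the surprise
assert predict("mushroom") == "get big"
```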
  26. Failing Means Learning 26 From the game Celeste released in

    2018
  27. Outline 27 •Mental model alignment theory •Case Studies •Puzzles •Problems

    •Programs •Tools for Thought
  28. What makes a good puzzle? 28 Mark Brown (YouTube video

    of same title): 0. Clever mechanics
  29. Cosmic Express 29 Draknek, 2017

  30. 0. Clever Mechanics 30 “Iron-clad rules & limitations that never

    change” In Cosmic Express: * Tracks can’t cross over one another * Each passenger must be taken to a matching box * Each train car can hold 1 passenger * Passengers will try to hop into any empty car that passes them
  31. 31 …with interesting, gradually-revealed consequences
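One of those rules, "passengers will try to hop into any empty car that passes them," can be sketched as code; the representation below is my own simplification, not the game's.

```python
# Simplified model of one Cosmic Express rule: each waiting passenger
# hops into the first empty car that passes them.

def board_passengers(cars, waiting):
    """cars: list of None (empty seat) or passenger id, in pass-by order.
    waiting: passenger ids in the order the train reaches them.
    Returns (cars, passengers left waiting)."""
    left_waiting = []
    for p in waiting:
        for i, seat in enumerate(cars):
            if seat is None:
                cars[i] = p         # passenger hops into the empty car
                break
        else:
            left_waiting.append(p)  # no empty car passed this passenger
    return cars, left_waiting

print(board_passengers([None, "A", None], ["B", "C", "D"]))
# -> (['B', 'A', 'C'], ['D'])
```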

  32. What makes a good puzzle? 32 1. Assumption 2. Catch

    3. Revelation Building on a solid foundation of mechanics:
  33. 1. Assumption 33 Each passenger should be delivered to the

    box nearest to it
  34. 2. Catch 34 Drawing routes to pick up passengers blocks

    access to boxes (and vice versa)
  35. 3. Revelation 35 Visit passengers in a different order from

    the boxes
  36. Research Question 36 Does the assumption-catch-revelation pattern support faster or

    more accurate mental model alignment?
  37. Laserverse 37 A POEM Lab original (Summer 2018)

  38. “Feedback” 38

  39. Conceptual Dependencies 39

  40. “Beamlock” (assumption/catch) 40

  41. “Beamlock” (revelation) 41

  42. Studying Mental Model Formation 42 Predict Explain Play

  43. Research Goal 43 Use what we learn about mental model

    formation to procedurally generate puzzle sequences that support deep mastery
  44. Outline 44 •Mental model alignment theory •Case Studies •Puzzles •Problems

    •Programs •Tools for Thought
  45. A certain class of puzzles? 45 Infinifactory (Zachtronics 2015), Factorio

    (Wube 2012)
  46. (a personal favorite) 46 The Incredible Machine (Dynamix, 1993)

  47. Problem Solving Games 47 Mark Brown, YouTube: “Puzzle Solving… or

    Problem Solving?”: DISCOVER THE SOLUTION vs INVENT A SOLUTION
  48. 48 Redstone in Minecraft (Mojang 2011)

  49. 49 Literally writing code in TIS-100 (Zachtronics 2015)

  50. 50 Literally teaching code in BOTS (Game2Learn lab, 2015)

  51. 51 Teaching functional idioms in Cube Composer (David Peter)

  52. Outline 52 •Mental model alignment theory •Case Studies •Puzzles •Problems

    •Programs •Tools for Thought
  53. Puzzles: small, intentionally-crafted solution space Designed Problems: bigger solution space,

    solution may be co-designed with the problem Programming: big solution space, problem precedes solution 53
  54. Mechanics 54 Programming Model/Language

  55. Mechanics Puzzles 55 Programming Model/Language Programming Problems

  56. Mechanics Puzzles Puzzle Solutions 56 Programming Model/Language Programming Problems Programs

  57. Learning rules from examples 57 Designing abstractions

  58. Learning rules from examples 58 Designing abstractions

  59. How learners develop mental models of the programming model/language: lots

    of prior research 59
  60. Most of us are coding with an incomplete mental model

    of our programming language (see: Gary Bernhardt’s WAT talk) 60 https://www.destroyallsoftware.com/talks/wat
  61. Most of us are coding with an incomplete mental model

    of our programming language … but most of our bugs come from poor mental models of our domains 61
  62. Outline 62 •Mental model alignment theory •Case Studies •Puzzles •Problems

    •Programs •Tools for Thought
  63. Tools for communicating our mental models to a system 63

  64. Writing Tests count_os("strangeloooop") = 4 count_os("object oriented") = 2 count_os("functional")

    = 1 64 Each test is a communication of our mental model to the program
  65. count_os("strangeloooop") = 4 count_os("object oriented") = 2 count_os("functional") = 1

    count_os("") = 1337 65 We write tests because we want failure to happen early!
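Assuming the obvious spec (count occurrences of the letter "o"), the tests above become executable in Python; the slide's count_os("") = 1337 is presumably a deliberately wrong expectation, there so the failure happens early.

```python
def count_os(s: str) -> int:
    """Count occurrences of the letter 'o' in s."""
    return sum(1 for ch in s if ch == "o")

# The slides' tests as assertions; each one communicates a piece
# of our mental model of count_os to the program.
assert count_os("strangeloooop") == 4
assert count_os("object oriented") == 2
assert count_os("functional") == 1
assert count_os("") == 0  # the slide's `1337` expectation would fail here, early
```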
  66. Type annotations: another way to express our mental models count_os

    : string -> int 66
  67. Checkable specs (or fancier types) = more precise communication of

    mental models count_os : s:string -> n:int | 0 <= n <= len(s) 67
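The refinement type on this slide would be checked statically in a language like F* or Liquid Haskell; in plain Python we can only approximate it with a runtime check (a sketch, assuming the same occurrence-counting count_os):

```python
def count_os(s: str) -> int:
    return sum(1 for ch in s if ch == "o")

def checked_count_os(s: str) -> int:
    # Runtime approximation of the spec  n : int | 0 <= n <= len(s)
    n = count_os(s)
    assert 0 <= n <= len(s), "spec violated: result out of range"
    return n

print(checked_count_os("strangeloooop"))  # -> 4
```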
  68. 68 Solver-aided tools: directly run our mental models as programs?

  69. Program Synthesis 69

  70. Procedural Generation 70

  71. Solver-aided tools: directly run our mental models as programs? 71

    …but how do we form correct mental models of the solution space, navigate it, and change it?
  72. Cognitive Artifacts 72 David Krakauer Competitive: enhance our abilities

    while we use the artifact, but leave us no more capable than before once we stop using it. Complementary: give us new abilities that we internalize and can still use when the artifact is discarded.
  73. Cognitive Artifacts 73 GPS navigation: as a driver vs. as

    a passenger
  74. Cognitive Artifacts 74 Changing our relationships with the systems we’re

    a part of
  75. Takeaways 75 1. Automated systems that help us develop mental models are

    more powerful than ones that make our mental models obsolete 2. More research is needed to understand how!
  76. Collaborate! 76 https://go.ncsu.edu/poem * Apply for a Ph.D. in Computer

    Science at NC State! Deadline December 15 * Faculty/industry collaborators welcome!
  77. Chris Martens go.ncsu.edu/martens context.adventure@gmail.com Twitter: @chrisamaphone 77

  78. Extra Slides 78

  79. Neural Networks 79 A pile of annotated data trains a MODEL; given a

    new example, the MODEL produces a new annotation!
  80. Failing is Learning 80 Type Errors

  81. Zendo 81 (Kory Heath, 1999) Follows the rule Doesn’t follow

    the rule
  82. Zendo 82 (Kory Heath, 1999) Guesser: Does this example follow

    the rule?
  83. Zendo 83 (Kory Heath, 1999) Rulemaker: no.

  84. Zendo 84 (Kory Heath, 1999) Guesser: is the rule that

    there must not be a pentagon? Follows the rule Doesn’t follow the rule
  85. Zendo 85 (Kory Heath, 1999) Rulemaker: no. Counterexample: Doesn’t follow

    the proposed rule, follows my rule
  86. Zendo 86 (Kory Heath, 1999)
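The Zendo exchange above (guess a rule, receive a counterexample, revise) is counterexample-guided rule learning. A minimal sketch in Python, with the hidden rule, candidate rules, and arrangements all invented for illustration:

```python
# Counterexample-guided rule guessing in the spirit of Zendo.
# The secret rule, candidates, and universe are invented.

def secret_rule(xs):            # the rulemaker's hidden rule
    return len(xs) >= 3

candidates = {
    "no_pentagon": lambda xs: "pentagon" not in xs,
    "has_triangle": lambda xs: "triangle" in xs,
    "at_least_three": lambda xs: len(xs) >= 3,
}

universe = [                    # arrangements the rulemaker can build
    ["triangle"],
    ["pentagon", "circle"],
    ["triangle", "circle", "square"],
    ["pentagon", "circle", "square"],
]

# A guess survives only if the rulemaker can produce no counterexample:
# no arrangement where the guess and the secret rule disagree.
viable = {name for name, rule in candidates.items()
          if all(rule(xs) == secret_rule(xs) for xs in universe)}
print(sorted(viable))  # -> ['at_least_three']
```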