
Cognitive explanations and Large Language Models


Presented at the SweCog conference in Gothenburg, October 2023

Andreas Chatzopoulos

October 19, 2023



Transcript

  1. Cognitive explanations and Large Language Models

     ANDREAS CHATZOPOULOS, DEPARTMENT OF APPLIED INFORMATION TECHNOLOGY, UNIVERSITY OF GOTHENBURG
  2. Types of explanation

     Mechanistic explanation:
     • Explains the behavior of a complex system with a model of the mechanistic interaction between its parts
     • Bottom-up approach that can capture emergent phenomena

     Dynamical explanation:
     • Uses dynamical models to study qualitative features, regardless of component details
     • Describes how a system evolves over time
  3. Models and simulations

     Discrete simulations:
     • Based on a discrete space-time structure
     • The set of possible states is assumed to be discrete

     Continuous simulations:
     • The underlying space-time structure and the possible states are continuous
     • Formulated in differential equations (see the sketch below)
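To make the distinction concrete, here is a minimal Python sketch (an editorial illustration, not from the talk): a discrete simulation that jumps between a finite set of states in fixed time steps, next to a continuous simulation defined by a differential equation and integrated numerically. All function names and parameters are hypothetical.

```python
import numpy as np

# Discrete simulation: time advances in steps and the system occupies
# one of a finite set of states (here, a toy two-state process).
def discrete_sim(steps=10, p_switch=0.3, seed=0):
    rng = np.random.default_rng(seed)
    state = 0
    trajectory = [state]
    for _ in range(steps):
        if rng.random() < p_switch:
            state = 1 - state  # jump between the discrete states 0 and 1
        trajectory.append(state)
    return trajectory

# Continuous simulation: state and time are continuous; the dynamics are
# given by the differential equation dx/dt = -k*x, integrated with Euler steps.
def continuous_sim(x0=1.0, k=0.5, dt=0.01, t_end=10.0):
    xs, x = [x0], x0
    for _ in range(int(t_end / dt)):
        x += dt * (-k * x)  # one Euler step of the ODE
        xs.append(x)
    return xs

print(discrete_sim())        # a sequence of discrete states, e.g. [0, 0, 1, ...]
print(continuous_sim()[-1])  # the continuous state decays smoothly toward 0
```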
  4. Simulations and explanations

     Discrete simulations → Mechanistic models → Mechanistic explanations
     Continuous simulations → Dynamical models → Dynamical explanations
  5. Large Language Models

     • Not models of the brain to begin with
     • How shall we think about this?

     MECHANISTIC? Captures the causal relationship between words and concepts.
     DYNAMICAL? Learns the statistical relationship between words and concepts (illustrated in the sketch below).
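At its very simplest, "learning the statistical relationship between words" can be pictured with a toy bigram model (an editorial sketch; real LLMs learn far richer relationships with neural networks): count which words follow which, then turn the counts into next-word probabilities.

```python
from collections import Counter, defaultdict

# Toy bigram model: estimate P(next word | current word) from raw counts.
# The corpus and numbers are illustrative only.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1  # tally each observed word pair

def next_word_probs(word):
    """Relative frequencies of the words seen after `word`."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
print(next_word_probs("cat"))  # {'sat': 0.5, 'ate': 0.5}
```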
  6. Philosophical analysis

     HOW-ACTUAL EXPLANATION (HAE)
     • Illustrates a phenomenon in the way it actually occurs
     • A model of how things actually are

     HOW-POSSIBLY EXPLANATION (HPE)
     • Propositional model of how a phenomenon might possibly occur
     • A model of how things could possibly be
  7. Epistemic plausibility

     A scale from possibly, through plausibly, to actual:
     • Normative scale: progress entails moving explanations towards actual (HAE)
     • A hypothesis moves towards corroboration
  8. Philosophical analysis (Stuart Glennan)

     • The relationship is all about similarities: how similar is the model to the target?
     • A matter of degree: models represent a target more or less
     • How-actual models represent the target fully
     • How-possibly models do not represent anything at all
  9. Philosophical analysis (Stuart Glennan)

     [Figure: a how-actual model, whose components match the target's, contrasted with a how-possibly model, whose correspondence to the target is an open question]
  10. Philosophical analysis (Stuart Glennan)

      Instead of dividing models into how-possibly and how-actual:
      • Adjust their similarity requirements
      • With lowered similarity requirements, a model may succeed in representing a target, if only roughly
  11. Philosophical analysis (Stuart Glennan)

      [Figure: a model that matches only half of the target's components]
      • A similarity requirement of 100% renders this model false
      • A similarity requirement of 50% renders the model true (see the sketch below)
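One way to make the threshold idea concrete (my rendering, not Glennan's own formalism): treat the model and the target as sets of components, score similarity as the fraction of the target's components the model reproduces, and judge the model against an adjustable requirement. The component names below are hypothetical stand-ins for the coloured dots in the figure.

```python
# Editorial sketch of the similarity-requirement idea; all names are hypothetical.
target = {"red1", "orange1", "red2", "orange2"}  # the target's components
model = {"red1", "orange1", "green", "blue"}     # the model's components

def similarity(model, target):
    """Fraction of the target's components that the model reproduces."""
    return len(model & target) / len(target)

def represents(model, target, requirement):
    """The model counts as representing the target if it meets the requirement."""
    return similarity(model, target) >= requirement

print(similarity(model, target))                   # 0.5: half the components match
print(represents(model, target, requirement=1.0))  # False: a 100% requirement renders it false
print(represents(model, target, requirement=0.5))  # True: a 50% requirement renders it true
```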
  12. Philosophical analysis (Stuart Glennan)

      HOW-ROUGHLY EXPLANATION (HRE)
      • A model that is held to less strict similarity requirements
      • Not to be viewed as a hypothesis that might possibly be true, but as a model whose lesser requirements enable it to actually represent a target
      • Even "false" models can highlight certain "true" features that we can learn from
  13. Philosophical analysis (Stuart Glennan)

      Example: TIM VAN GELDER'S DYNAMICAL SYSTEMS APPROACH
      • A dynamical view as an alternative to the computational model
      • Continuous interactions described in differential equations
      • Makes no claim to be a realistic model of the brain
      • Aims to capture certain important features rather than being a 1-to-1 representation
  14. Philosophical analysis (Stuart Glennan)

      Example: TIM VAN GELDER'S DYNAMICAL SYSTEMS APPROACH
      • Model based on oscillations that captures some aspects of decision-making that elude classical models (a generic sketch follows below)
      • In some respects, this is a good model of human decision-making mechanisms
      • Replicates certain features in a better way
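For a flavour of what an oscillation-based dynamical model looks like (a generic editorial sketch, not van Gelder's actual equations): a damped oscillator whose continuous state swings back and forth, standing in for deliberation, before settling toward an attractor, standing in for a choice. All parameters are illustrative.

```python
# Generic damped-oscillator sketch of a dynamical decision process.
# NOT van Gelder's model; the mapping to decision-making is purely illustrative.
def simulate(x0=1.0, v0=0.0, damping=0.3, stiffness=2.0, dt=0.01, t_end=20.0):
    x, v = x0, v0                         # state (preference) and its velocity
    trajectory = []
    for _ in range(int(t_end / dt)):
        a = -damping * v - stiffness * x  # acceleration from the ODE
        v += dt * a                       # Euler integration step
        x += dt * v
        trajectory.append(x)
    return trajectory

traj = simulate()
print(traj[:3])   # early values: the state oscillates (deliberation)
print(traj[-1])   # late value near 0: the system has settled (decision)
```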
  15. Philosophical analysis (Stuart Glennan)

      Dynamical Systems Approach: HOW-ROUGHLY MODEL
      • A similarity requirement of 50% means that it roughly represents its target
  16. Large Language Models

      • Based on artificial neural networks implemented in a transformer architecture (see the attention sketch below)
      • Not intended as a representation of the brain
      • Could they teach us something about the brain, regardless?
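For readers unfamiliar with the term: the core operation of a transformer is scaled dot-product attention, in which every token weighs every other token when building its representation. Below is a minimal numpy sketch of that single operation (toy dimensions and random inputs; a real LLM stacks many such layers with learned weights).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; values are mixed by softmax weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # weighted mixture of the value vectors

# Toy input: 3 tokens with embedding dimension 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```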
  17. Large Language Models

      Large Language Model: HOW-ROUGHLY MODEL?
      • Can teach us things even if it wasn't intended to do so
  18. Large Language Models

      Dynamical Systems Approach: HOW-ROUGHLY MODEL
      • A similarity requirement of 50% makes it how-roughly

      Large Language Model: HOW-ROUGHLY MODEL?
      • A similarity requirement of 25% makes it how-roughly
  19. Large Language Models

      • Large Language Model: not intended to be a representation of the brain at all
      • Dynamical Systems Approach: intended to represent certain features of the brain
  20. Summary

      • LLMs cannot be categorized as mechanistic or dynamical models, and they are not intended as representations of the brain
      • Even so, a philosophical analysis suggests that LLMs could be viewed as how-roughly explanations if we lower the similarity requirements, following Glennan's model
      • In this way, they may be able to tell us important things about cognitive processes, just as dynamical models tell us something about the brain without being intended as full representations