
AI and consciousness


Basics of Philosophy of Mind and trends in AI from the beginning until today. What Large Language Models can tell us about The Science of Consciousness.

Andreas Chatzopoulos

October 19, 2023



Transcript

  1. Philosophical Background: The mind-body problem. Dualism: the mental and the material are two different substances. Problem: how can they interact? Materialism: the mind seems to be closely related to the brain, and there is only one kind of substance, physical matter. Are mental phenomena part of our physical world?
  2. Philosophical Background: Physicalism. Closely related to materialism: everything can be explained in terms of physics, and everything follows the laws of physics. Cognitive aspects can be explained by the physical interaction of neurons; everything is physics in the end.
  3. Philosophical Background: Emergence. Water analogy: water has properties like wetness and transparency, yet it consists of molecules that do not have these properties in themselves; the properties emerge from a large collection of molecules. Likewise, consciousness emerges from neurons that do not have conscious properties: consciousness is an emergent phenomenon of the brain.
  4. Philosophical Background: Emergence, critique. Emergence creates properties out of thin air and says nothing about how they emerge. The water analogy is flawed: properties like wetness and transparency can be fully explained by the behaviour of water molecules. We must find a way to explain how neurons give rise to consciousness.
  5. Philosophical Challenges: The hard problem. How do we find the bridge laws that connect these levels, from physical processes up to conscious experience?
  6. Philosophical Challenges: Phenomenal consciousness. How is it generated in the brain? How can one explain it scientifically? What place does it hold in a scientific worldview?
  7. Philosophical Challenges: The knowledge argument. Imagine a future with a complete neuroscience that knows exactly everything about the brain. Mary is a neuroscientist who is born and raised in a completely colourless room. What happens when she leaves the room? Even though she knows everything about brain processes, she still learns something new by experiencing colours for the first time.
  8. Philosophical Challenges: The knowledge argument. Neuroscience cannot capture the phenomenal experience of seeing colours, no matter how complete a science it is. Even if Mary knows everything about how colours are processed in the brain, she has never experienced it herself. This is an argument against physicalism.
  9. Philosophical Challenges: Unsolved problems. There are many theories about how conscious experiences arise, but a lot of problems remain unsolved. How can we explain subjective experiences in a scientifically objective way? What are phenomenal experiences, and how can they be explained scientifically?
  10. Technological Challenges: Alan Turing (1912–1954). British mathematician who laid the theoretical foundation for modern computers. In 1950, he was one of the first to suggest Artificial Intelligence as a possibility. AI as a field was founded at the Dartmouth summer conference of 1956.
  11. Technological Challenges: The Turing test. An interrogator tries to determine which conversation partner is the machine and which is the human, while the machine tries to fool the interrogator into believing it is human. If the machine succeeds, it can think as a human.
  12. Technological Challenges: AI in the beginning. Programs that proved logical theorems, since reasoning logically was considered an important part of human intelligence. Early on, this approach led to the successful development of chess programs and the like.
  13. Technological Challenges: Critique of Symbolic AI, the Chinese room. Task: match incoming questions with predetermined answers. The questions and answers are in Chinese, and the person in the room uses a rulebook, since he doesn't understand Chinese. To an external observer, it appears as if someone in the room understands Chinese.
  14. Technological Challenges: Critique of Symbolic AI, the point. All computers that operate from predetermined rules are like the Chinese room; there is no real understanding to be found anywhere. Therefore, a computer cannot fully emulate the brain with rule-based symbol manipulation.
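The Chinese room above can be sketched in a few lines. This is an illustrative toy (the rulebook contents are invented for the example, not taken from the slides): a lookup table that matches incoming questions to predetermined answers, producing fluent-looking replies with no understanding anywhere.

```python
# A "rulebook" matching questions to predetermined answers, as in the
# Chinese room. The example phrases are hypothetical stand-ins.
rulebook = {
    "你好吗?": "我很好, 谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明。",      # "What's your name?" -> "My name is Xiaoming."
}

def person_in_room(question):
    # The person only follows the rulebook; no Chinese is understood.
    return rulebook.get(question, "对不起, 我不明白。")  # fallback: "Sorry, I don't understand."

print(person_in_room("你好吗?"))  # -> 我很好, 谢谢。
```

To an outside observer the replies look competent, which is exactly Searle's point: rule-following alone produces the appearance of understanding, not understanding itself.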
  15. Technological Challenges: Different approaches. Symbolic AI, assumption: all of human intelligence can be expressed as a set of logical symbols and rules. It uses rules and algorithms to represent knowledge and simulate human reasoning; intelligence becomes just a matter of explicitly coding human knowledge into logical rules and processes. Artificial Neural Networks (ANNs): inspired by the neurons in the human brain, interconnected nodes that send signals to each other. They learn by themselves instead of being programmed, tweaking the signal strengths between nodes to simulate the brain's learning mechanisms, and they require large amounts of data to learn from.
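The "tweaking signal strengths" idea can be shown with the smallest possible ANN: a single artificial neuron (a perceptron) that learns the logical AND function from examples rather than being explicitly programmed. This is a minimal sketch for illustration, not a description of any system mentioned in the talk.

```python
def step(x):
    # fire (1) if the weighted input exceeds the threshold, else stay silent (0)
    return 1 if x > 0 else 0

# training data: inputs and desired outputs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # "signal strengths" (weights) between nodes
b = 0.0          # bias term
lr = 0.1         # learning rate: how much to tweak per mistake

for _ in range(20):                       # repeat passes over the data
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out                # compare output with the target
        w[0] += lr * err * x1             # nudge each weight toward the target
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Nothing here encodes the AND rule explicitly; the behaviour is learned purely by adjusting connection strengths in response to errors, which is the contrast with Symbolic AI the slide is drawing.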
  16. Technological Challenges: AI in modern times. ANNs didn't take hold until around 2010, for two reasons. Performance: before that, computers weren't able to handle the large amount of parallel processing required. Data availability: ANNs learn by training on very large amounts of data, which nowadays can be acquired through the internet.
  17. Large Language Models: Transformer architecture. A new architecture for Artificial Neural Networks, developed since 2017. It generates realistic, human-like conversations by predicting the next word in sentences.
  18. Large Language Models: Transformer architecture. The network learns to string together sentences by looking at a huge corpus of text data (basically the whole internet); the slide illustrates this with the phrase "a robot must obey".
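The training objective on the slide, predict the next word from a corpus, can be sketched with the simplest possible stand-in: a bigram counter. A real transformer learns far richer patterns over vastly more text, but the objective is the same. The toy corpus below extends the slide's phrase for the sake of the example.

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical extension of the slide's "a robot must obey")
corpus = "a robot must obey a robot must not harm".split()

# count which word follows which
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("robot"))  # -> must
```

Generating text is then just repeated next-word prediction: feed the output back in as the new context. The difference between this and a transformer is not the objective but the model: counts over adjacent pairs versus a deep network attending over long contexts.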
  19. Large Language Models: Are they conscious? Do they have real understanding? Phenomenal experiences? It is not far-fetched to see them pass the Turing Test. Is this enough to view them as conscious?
  20. Large Language Models: Are they conscious? There are lots of different theories about what makes a system conscious. Selected views: requires built-in models of the world; requires a specific architecture; requires specific biological properties; requires senses and embodiment.
  22. Large Language Models: Are they conscious? Requires built-in models of the world: research indicates that some LLM-based systems already have this. Requires a specific architecture: shouldn't be impossible either; it's just a matter of time. Requires specific biological properties: if this is true, no AI system can ever be conscious. Requires senses and embodiment: shouldn't be impossible to implement; someone is surely working on it.
  23. Large Language Models: Are they conscious? Requires specific biological properties: this is basically the only view that entails a hard limit. If biological brains are a requirement for consciousness, we will never be able to emulate it artificially, and no AI system can ever be conscious.
  24. Large Language Models: Are they conscious? LLMs are not conscious at the moment, but if we believe that consciousness can be implemented in something other than biological brains, it may only be a matter of time. They are already behaving intelligently, which should be viewed as a different property than consciousness. Philosophical challenges about phenomenal experiences are still unsolved, but the development of LLMs might give us some insight into them.