Basics of Philosophy of Mind and trends in AI from the beginning until today. What Large Language Models can tell us about The Science of Consciousness.
Dualism: Mind and matter are two different substances. Problem: how can they interact? Materialism: The mind seems to be closely related to the brain. There is only one kind of substance: physical matter. Are mental phenomena part of our physical world?
Physicalism: Everything can be explained in terms of physics. Everything follows the laws of physics. Cognitive aspects can be explained by the physical interaction of neurons. Everything is physics in the end.
Consciousness is an emergent phenomenon of the brain: Water has properties such as wetness, transparency, etc., yet it consists of water molecules that do not have these properties in themselves - the properties emerge from a large collection of molecules. -> Consciousness emerges from neurons that do not have conscious properties.
Objection: calling consciousness "emergent" says nothing about how it emerges. The water analogy is also flawed: properties like wetness and transparency can be fully explained by the behaviour of water molecules. -> We must find a way to explain how neurons give rise to consciousness.
The knowledge argument (Mary's Room): Mary is a neuroscientist who knows exactly everything about the brain, but she is born and raised in a completely colourless room. What happens when she leaves the room? Even though she knows everything about brain processes, she still learns something new by experiencing colours for the first time.
Physical science cannot capture the subjective experience of seeing colours, no matter how complete the science is. Even if Mary knows everything about how colours are processed in the brain, she has never experienced it herself. -> Argument against physicalism.
There are still a lot of unsolved problems, above all how subjective experience arises. How can we explain subjective experiences in a scientifically objective way? What are phenomenal experiences, and how can they be explained scientifically?
Alan Turing (1912-1954) laid the theoretical groundwork for modern computers. In 1950, he was one of the first to suggest Artificial Intelligence as a possibility. AI as a field was founded at the Dartmouth summer conference of 1956.
The Turing Test: An interrogator converses with a machine and a human and must decide which one is the machine and which one is the human. The machine tries to fool the interrogator into believing it is human. If the machine succeeds -> it can think as a human.
The ability to prove theorems - to reason logically - was considered an important part of human intelligence. Early on, this approach led to the successful development of chess programs and the like.
The Chinese Room (Searle): A person in a room answers questions by following a rulebook with predetermined answers. The questions and answers are in Chinese, and the person in the room relies on the rulebook since he doesn't understand Chinese. To an external observer, it appears as if someone in the room understands Chinese.
Computers that operate from predetermined rules are like the Chinese Room: there is no real understanding to be found anywhere. -> A computer cannot fully emulate the brain with rule-based symbol manipulation.
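To make "rule-based symbol manipulation" concrete, here is a minimal, hypothetical sketch in Python: a responder that matches input symbols against a fixed rulebook, in the spirit of the Chinese Room. The phrases and rulebook entries are invented for illustration; the point is only that nothing in the program understands the symbols it shuffles.

```python
# A minimal, hypothetical "Chinese Room" in code: the program matches input
# symbols against a fixed rulebook and returns the predetermined answer.
# Nothing here understands Chinese; it only manipulates symbols by rule.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What is your name?" -> "My name is Xiaoming."
}

def respond(question: str) -> str:
    """Look up the question in the rulebook; the 'room' never interprets it."""
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(respond("你好吗？"))          # Looks fluent to an outside observer,
    print(respond("今天天气怎么样？"))   # yet no understanding exists anywhere.
```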
Symbolic AI: Assumes that intelligence can be expressed as a set of logical symbols and rules. Uses rules and algorithms to represent knowledge and simulate human reasoning. Just a matter of explicitly coding human knowledge into logical rules and processes.
Artificial Neural Networks (ANNs): Inspired by the function of the neurons in the human brain - interconnected nodes that send signals to each other. Learn by themselves instead of being programmed: the signal strengths between nodes are tweaked to simulate the brain's learning mechanisms (see the sketch below). Require large amounts of data to learn from.
ANNs did not take off until around 2010, because of: Performance: before then, computers weren't able to handle the large amount of parallel processing required. Data availability: ANNs learn by training on very large amounts of data - nowadays this can be acquired through the internet.
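As a contrast to rule-based programming, here is a minimal sketch of the ANN idea described above, assuming nothing beyond that description: a single artificial neuron whose connection weights ("signal strengths") are nudged from example data rather than being explicitly programmed. The training data and learning rate are made up for illustration.

```python
import random

# A single artificial neuron: output = step(w1*x1 + w2*x2 + bias).
# Learning means repeatedly nudging the weights ("signal strengths")
# toward values that reproduce the training examples.

# Made-up training data: learn the logical OR function from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

for epoch in range(50):                          # repeat over the data many times
    for x, target in examples:
        error = target - predict(x)              # how wrong was the neuron?
        weights[0] += learning_rate * error * x[0]   # nudge each weight a little
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])         # typically [0, 1, 1, 1] after training
```

No rule for OR is ever written into the program; the behaviour comes entirely from adjusting the weights against the examples, which is why this approach needs data rather than hand-coded knowledge.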
Selected views about what makes a system conscious: requires built-in models of the world; requires a specific architecture; requires specific biological properties; requires senses and embodiment.
Large Language Models - are they conscious? Requires built-in models of the world: research indicates that some LLM-based systems already have this. Requires a specific architecture, or requires senses and embodiment: this shouldn't be impossible to implement - someone is surely working on it. Requires specific biological properties: this is basically the only view that entails a hard limit. If biological brains are a requirement for consciousness, we will never be able to emulate it artificially - if this is true, no AI system can ever be conscious.
LLMs are probably not conscious at the moment, but if we believe that consciousness can be implemented in something other than biological brains, it may only be a matter of time. They are already behaving intelligently, which should be viewed as a different property than consciousness. Philosophical challenges about phenomenal experiences are still unsolved, but the development of LLMs might give us some insight into them.