Interpretable Deep Learning Model for Decoding Hypnotic Experience from Raw EEG Data

Yeganeh Farahzadi

November 23, 2024

Transcript

  1. Interpretable Deep Learning Model for Decoding Hypnotic Experience from Raw EEG Data
     Yeganeh Farahzadi, Morteza Ansarina, Zoltan Kekecs
     SCEH 74th Annual Workshops & Scientific Program (October 2023)
     Eötvös Loránd University, University of Luxembourg
  2. Outline
     • Introduction: deep learning and its potential application in understanding the cognitive neuroscience of hypnosis.
     • Methods: implementing a deep learning neural network on raw EEG data.
     • Results: interpreting the implications of this study.
     • Limitations and future directions.
  3. Bridge the Gap between Mind & Brain
     • What are the neural mechanisms that support responses to hypnotic induction and suggestions?
     • What are the neurophysiological correlates of hypnotic suggestibility?
     (Jensen et al., 2017; Kihlstrom, 2013)
  4. Bridge the Gap between Mind & Brain
     Solving a task with data requires a representation: a different way to look at the data, an encoding that is useful for the task at hand (e.g., after de-noising). Here the task is: is the hypnotic experience deep or superficial?
     Example with time: a clock face is a visual representation, suited to quick inspection of elapsing time; a digital readout is an exact representation, suited to arithmetic calculation.
  5. Representation & Rules: three tasks.
     • Task: is the circle pink or blue? Data: point coordinates. Representation: a rotation of the axes. Rules: x > 0 versus x < 0.
     • Task: is the sentence positive or negative? Data: "I felt an incredible sense of calm and relaxation" versus "I had a difficult time getting into a trance." Representation: tokenisation and removal of stop words yield "feel", "incredible", "sense", "calm", "relaxation" and "difficult", "time", "get", "trance"; the Affective Norms for English Words (AFINN) lexicon scores these tokens 0, 4, 0, 3, 4 and 0, 0, 0, -3. Rules: sum > 0 is positive, sum < 0 is negative (see the sketch below). A sentence such as "I went into a deep trance, but it was an unsettling experience" shows where this hand-crafted rule breaks down.
     • Task: is the person in deep or superficial hypnosis? Data: EEG. Representation: hand-crafted processing of the signal, e.g., gamma-band features.
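To make the hand-crafted pipeline concrete, here is a minimal Python sketch of the AFINN-style rule above. The stop-word set and score table are small illustrative excerpts, not the real AFINN lexicon.

```python
# Toy excerpts; the real AFINN lexicon scores thousands of words.
STOP_WORDS = {"i", "a", "an", "of", "and", "had", "into"}
SCORES = {"incredible": 4, "calm": 3, "relaxation": 4, "difficult": -3}

def sentiment(sentence: str) -> str:
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    tokens = [t for t in tokens if t not in STOP_WORDS]   # remove stop words
    total = sum(SCORES.get(t, 0) for t in tokens)         # unknown words score 0
    return "positive" if total > 0 else "negative"

print(sentiment("I felt an incredible sense of calm and relaxation"))  # positive
print(sentiment("I had a difficult time getting into a trance."))      # negative
```

The third sentence on the slide defeats this rule: a lexicon score treats "deep trance" as neutral or positive and misses that "unsettling" carries the real signal, which is exactly why learned representations are the next step.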
  6. Solution: learn the representation and the rules from the data.
     Task: is the sentence positive or negative? Data: "I felt an incredible sense of calm and relaxation."
     Rather than hand-crafting the representation and rules, the model is first pre-trained on a self-supervised task (predict masked words) and then fine-tuned on downstream tasks such as this sentiment classification.
  7. Anatomy of a neural network (Chollet, 2021).
     Inputs x1, x2, x3 flow through an input layer, a 1st hidden layer, a 2nd hidden layer, and an output layer. Each unit computes a weighted sum of its inputs, ∑_{i=1}^{3} w_i x_i, where the weights w1, w2, w3 are the learnable parameters. The "deep" in deep learning stands for this idea of successive learning of representations: the 1st hidden layer forms a 1st layer of representation, the 2nd hidden layer a 2nd layer of representation. The prediction ŷ is compared with the true label y_true to produce a loss score, which serves as the feedback signal for updating the weights (see the sketch below).
     Layer types can match the structure of the data: recurrent layers encode temporal information; convolutional layers (Conv1D, Conv2D) encode spatial information.
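The forward-pass/feedback loop on this slide maps onto a few lines of PyTorch (the deck's own framework, per the implementation details later). The layer widths, toy data, and MSE loss here are illustrative assumptions, not the presented model.

```python
import torch
import torch.nn as nn

# Two hidden layers = two successive layers of representation.
model = nn.Sequential(
    nn.Linear(3, 8), nn.ReLU(),   # 1st hidden layer (1st representation)
    nn.Linear(8, 8), nn.ReLU(),   # 2nd hidden layer (2nd representation)
    nn.Linear(8, 1),              # output layer
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(16, 3)            # 16 samples of inputs x1, x2, x3
y_true = torch.randn(16, 1)

y_hat = model(x)                  # prediction ŷ
loss = loss_fn(y_hat, y_true)     # loss score comparing ŷ with y_true
optimiser.zero_grad()
loss.backward()                   # feedback signal: gradients w.r.t. the weights
optimiser.step()                  # update the learnable parameters
```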
  8. Objective & Data.
     52 participants listened to a neutral hypnosis induction (Elkins, 2014) and rated their level of hypnosis depth on an 11-point Likert scale; ratings ≥ 5 were labelled deep and ratings < 5 superficial. The EEG was downsampled to 128 Hz and normalised (see the sketch below). Pre-training task: predict masked signal values.
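As a rough sketch of the preprocessing named above: downsampling to 128 Hz plus per-channel normalisation. The original sampling rate (512 Hz here), channel count, and z-scoring choice are assumptions for illustration; the deck does not specify them.

```python
import numpy as np
from scipy.signal import resample_poly

fs_orig, fs_new = 512, 128                    # assumed original rate; target 128 Hz
eeg = np.random.randn(64, 10 * fs_orig)       # 64 channels, 10 s of placeholder data

eeg_ds = resample_poly(eeg, up=fs_new, down=fs_orig, axis=1)  # anti-aliased downsample
mean = eeg_ds.mean(axis=1, keepdims=True)
std = eeg_ds.std(axis=1, keepdims=True)
eeg_norm = (eeg_ds - mean) / std              # z-score normalisation per channel
print(eeg_norm.shape)                         # (64, 1280)
```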
  9. Model.
     The input X_CNN ∈ ℝ^(|segments| × |time| × |channels|) passes through a spatial encoder (Conv1D), yielding a representation in ℝ^(|segments| × |time| × |features|), and then through a temporal encoder (recurrent), yielding H ∈ ℝ^(|segments| × |features|). For the pre-training task (predict masked signal values), a temporal decoder (recurrent) and a spatial decoder (Conv1D) reconstruct X̂_CNN. For the downstream task, H feeds a binary classification head: shallow (< 5) or deep (≥ 5). A sketch of this encoder-decoder follows the implementation details below.
     Implementation details: train/validation split 80%/20%; Python module: PyTorch (v2.0.1); batch size 256; learning rate 1e-2; optimiser Adam; loss functions MSE (reconstruction) and cross-entropy (classification).
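To make the tensor shapes concrete, here is a minimal PyTorch sketch of the encoder-decoder pathway. The sizes (64 channels, 32 latent features) and the choice of GRUs for the recurrent layers are hypothetical; the deck does not give the authors' exact layer widths.

```python
import torch
import torch.nn as nn

class EEGAutoencoder(nn.Module):
    def __init__(self, n_channels=64, n_features=32):
        super().__init__()
        # Spatial encoder (Conv1D across channels at each time step)
        self.spatial_enc = nn.Conv1d(n_channels, n_features, kernel_size=1)
        # Temporal encoder (recurrent): summarises each segment into H
        self.temporal_enc = nn.GRU(n_features, n_features, batch_first=True)
        # Temporal and spatial decoders reconstruct the masked signal
        self.temporal_dec = nn.GRU(n_features, n_features, batch_first=True)
        self.spatial_dec = nn.Conv1d(n_features, n_channels, kernel_size=1)

    def forward(self, x):                  # x: (segments, time, channels)
        z = self.spatial_enc(x.transpose(1, 2)).transpose(1, 2)
        z, h = self.temporal_enc(z)        # h holds the segment embedding H
        y, _ = self.temporal_dec(z)
        x_hat = self.spatial_dec(y.transpose(1, 2)).transpose(1, 2)
        return x_hat, h.squeeze(0)         # reconstruction X̂_CNN and H

x = torch.randn(8, 128, 64)                # 8 segments, 128 time steps, 64 channels
x_hat, H = EEGAutoencoder()(x)
print(x_hat.shape, H.shape)                # (8, 128, 64) and (8, 32)
```

In this sketch the binary shallow/deep classifier would be a small head (e.g., a linear layer) applied to H.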
  10. Results.
     Classification accuracy improved compared to a traditional machine learning pipeline (Farahzadi, Alldredge, & Kekecs, 2023), with results shown for train/test splits across the time axis and across the subject axis.
  11. Implications, Limitations & Future Directions.
     Implications:
     • Highlights the effectiveness of deep learning in handling raw EEG data with minimal preprocessing.
     • Takes us a step closer to addressing the multifaceted neural correlates of hypnosis.
     Limitations:
     • The train/test split was done along the time axis.
     • Small sample size.
     • Lack of any hyper-parameter optimisation.
     Future directions:
     • Optimise the model's hyperparameters and incorporate a larger sample size.
     • Employ AI interpretability methods to understand the internal representations of the model (one such method is sketched below).
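One interpretability method of the kind the last bullet gestures at is input-gradient saliency (cf. Thomas, Ré, & Poldrack, 2023, which benchmarks such methods). The toy stand-in classifier and shapes below are assumptions, not the presented model.

```python
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Flatten(), nn.Linear(128 * 64, 1))  # toy stand-in classifier

x = torch.randn(8, 128, 64, requires_grad=True)  # (segments, time, channels)
clf(x).sum().backward()                          # gradient of the score w.r.t. input
saliency = x.grad.abs()                          # importance of each time/channel point
print(saliency.shape)                            # torch.Size([8, 128, 64])
```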
  12. References
     Chollet, F. (2021). Deep Learning with Python (2nd ed.). Manning.
     Farahzadi, Y., Alldredge, C., & Kekecs, Z. (2023). Neural correlates of hypnosis: Insights from EEG-based machine learning models.
     Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
     Jensen, M. P., Jamieson, G. A., Lutz, A., Mazzoni, G., McGeown, W. J., Santarcangelo, E. L., ... & Terhune, D. B. (2017). New directions in hypnosis research: Strategies for advancing the cognitive and clinical neuroscience of hypnosis. Neuroscience of Consciousness, 2017(1), nix004.
     Kihlstrom, J. F. (2013). Neuro-hypnotism: Prospects for hypnosis and neuroscience. Cortex, 49(2), 365-374.
     Roy, Y., Banville, H., Albuquerque, I., Gramfort, A., Falk, T. H., & Faubert, J. (2019). Deep learning-based electroencephalography analysis: A systematic review. Journal of Neural Engineering, 16(5), 051001.
     Thomas, A. W., Ré, C., & Poldrack, R. A. (2023). Benchmarking explanation methods for mental state decoding with deep learning models. NeuroImage, 273, 120109.