Defense Slides

klivins

July 29, 2015

Transcript

  1. Project Overview •  Goal: To study how relational cognition can

    be promoted via visuospatial processes •  Heavy emphasis on priming
  2. Presentation Plan •  Theoretic Overview •  Can vision and space

    shape relations? (Ex 1) •  Can vision and space shape more abstract relations? (Ex 2) •  What’s responsible? (Ex 3) •  Isolating Attention (Ex 4) •  Theoretic Implications
  3. What Are Relations? •  A relational representation says whether

    some k elements have some particular relationship – E.g., loves(John, Mary) •  Can be considered functions  
  4. Characteristics of Relations •  They are flexible –  Arity – 

    Role-filler objects •  They are order sensitive –  “I am defending my dissertation” •  Relation: Is-defending •  Element 1: me •  Element 2: my dissertation  
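A relation of this kind can be sketched as a small data structure: a predicate name plus role-bound fillers whose order matters. A minimal Python illustration (the class and example names are hypothetical, not drawn from any implementation used in the project):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    """A relational representation: a predicate applied to ordered, role-bound fillers."""
    name: str
    roles: tuple      # e.g., ("defender", "defended")
    fillers: tuple    # e.g., ("me", "my dissertation")

    def __str__(self):
        return f"{self.name}({', '.join(self.fillers)})"

# "I am defending my dissertation" -> is-defending(me, my dissertation)
defending = Relation("is-defending", ("defender", "defended"), ("me", "my dissertation"))

# Order sensitivity: the same elements in different roles are not the same relation.
swapped = Relation("is-defending", ("defender", "defended"), ("my dissertation", "me"))

print(defending)             # is-defending(me, my dissertation)
print(defending == swapped)  # False
```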
  5. Relational Cognition •  Works with these types of representations • 

    Is focused on the roles that things play, rather than the features they possess •  Requires one to sometimes ignore statistical regularities (features) in favor of role-based similarities
  6. Relational Cognition –  Is the bread and butter for: – 

    Analogy making (Doumas & Hummel, 2005; Gentner, 1983) –  Performing generalizations (Doumas & Hummel, 2005; Holyoak & Hummel, 2001; Hummel & Holyoak, 2003) –  Induction (Holland et al., 1986; Hummel & Holyoak, 2003) –  Is involved in: –  Language development and use (Gentner & Namy, 2006) –  Strategic planning (Halford et al., 1995) –  Categorization (Gentner & Kurtz, 2005; Murphy & Medin, 1985) –  Social cognition (Spellman & Holyoak, 1992)
  7. A “Problem” •  Relational reasoning is powerful and people generally

    like relations – Relations are judged to be more meaningful than feature-based matches (Gentner, 1988) •  Relations are very difficult to think about – They tax working memory (e.g., Viskontas et al., 2004) •  So how can we promote relational reasoning and shape its trajectory?
  8. Priming •  A process wherein (typically non-explicit) exposure to a

    piece of information facilitates later use of that information or a related concept (Schunn & Dunbar, 1996)
  9. Relational Priming •  Three questions: – 1) Is it possible? – 2)

    To what extent is it possible? – 3) How might it work?
  10. Question 1 (Possible?) •  Relational “reminding” studies – Wharton et al.,

    (1994): reading stories can remind people of stories read earlier if there are structural similarities between them – Schunn & Dunbar (1996): solving relational problems can make solving structurally-consistent problems easier later on •  Seems possible
  11. Question 2 (How Much?) •  Conflicting evidence – Spellman et al.,

    (2001) •  Lexical decision task with letter strings and words •  Later pairs could exemplify same relations as earlier pairs •  No priming found unless explicit direction given – Bassok et al., (2008) •  Word pairs paired with addition facts •  Obligatory activation when word pairs were semantically aligned (e.g., “tulips-daisies”)
  12. Question 2 (How Much?) •  Methodological differences to blame? – Do

    they access the mechanisms responsible for priming to different degrees?
  13. Question 3 (Mechanism?) •  Will need to fall out of

    how relations are represented •  Multiple accounts
  14. Relational Mental Representations •  Account 1 – Symbolic –  Relations

    are structured •  Basic set of elements that are compositional •  Abstract and discrete (Markman & Dietrich, 2000) •  E.g., SMT (Gentner, 1983)/SME (Falkenhainer, Forbus, & Gentner, 1986, 1989) –  Relations are analogous to predicate calculus »  Attends(you, presentation) –  Reasoning about relations is guided by structural constraints •  Priming takes place in a “reminding stage” distinct from the reasoning stage, which runs in parallel and operates on features
  15. Relational Mental Representations   •  Account 2 – Sub-symbolic – Distributed

    representations – E.g., Leech, Mareschal, & Cooper (2008) •  Relational reasoning as priming associations in context –  E.g., A:B::C:D tasks like “puppy:dog::kitten:__” •  Highly problematic –  Struggles to integrate multiple relations (Doumas & Richland, 2008) »  Chases(x,y), Chases(y,z) to Follows(a,b) and Follows(c,a) –  Far reaching mappings (French, 2008)
  16. Relational Mental Representations   •  Account 3 – Pluralism – Symbolic

    like structures with distributed representations – E.g., DORA (Doumas, Hummel, & Sandhofer, 2008)
  17. DORA Unpacked •  Lowest level is made up of distributed

    feature sets •  Built into propositional structures across layers of nodes •  Representations coded as roles and objects –  E.g., “Katherine gets her PhD” •  passer(Katherine) + passed(PhD) •  Binds through temporal asynchrony
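DORA's binding mechanism can be illustrated with a toy sketch: a role unit fires, its filler fires immediately after, and different role-filler pairs occupy different time windows. This is only a schematic illustration of temporal asynchrony, not the actual DORA code:

```python
# Toy illustration of binding by temporal asynchrony: each role-filler pair
# "fires" in adjacent time steps, and different pairs fire in later windows,
# so the binding is carried by when units are active rather than by labels.

proposition = [("passer", "Katherine"), ("passed", "PhD")]

def firing_sequence(prop):
    """Return (time_step, active_unit) pairs encoding role-filler bindings."""
    sequence = []
    t = 0
    for role, filler in prop:
        sequence.append((t, role))        # role unit fires first...
        sequence.append((t + 1, filler))  # ...its filler fires immediately after
        t += 2                            # next binding occupies a later time window
    return sequence

for step, unit in firing_sequence(proposition):
    print(f"t={step}: {unit} active")
```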
  18. DORA on Priming •  Could be representation based – Relations have

    content – That content can be activated •  Could be attention based – Relational roles and fillers get fired at different times – Could changing firing order affect recognition and reasoning?
  19. Physical Priming •  Often involves semantic priming – Image schemas (Mandler

    1992) •  E.g., Richardson et al (2003): recognition times were faster when object images were presented congruently with verb image schemas •  E.g., Pedone et al., (2001): diagrammatic differences affect Duncker problem solutions – Visual Attention •  Grant & Spivey (2003) on the Duncker problem –  Thomas & Lleras (2007)
  20. Why Do More? •  Relational Tasks: – Need for role sensitivity

    – Need to reason over features •  E.g., cross-mappings – Need to show flexibility •  (objects change, relation might not)
  21. Project Goal •  To determine whether relations can be primed

    using visuospatial cues while performing tasks that require relational competency •  To then determine what mechanisms might be responsible for that priming
  22. Experiment 1   •  More specifically: To determine whether simple,

    spatial relations (above and below) can be primed on a feedback-based category-learning task using a subtle prime – Categories were made up of simple geometric shapes and their spatial locations on the x and y axes
  23. Relational Categories •  A specific type of category

    •  Define membership based on some common relational structure instead of member features (Gentner & Kurtz, 2005)
  24. Relational Categories •  A specific type of category •  Define

    membership based on some common relational structure instead of member features (Gentner & Kurtz, 2005)
  25. Participants •  105 UC Merced undergraduate students were recruited through

    the online system (SONA) and offered course credit for participation. – Thirteen of these participants were not included in final calculations due to lack of learning. – Resulted in 92 reported participants
  26. Design •  Each participant was assigned to one of three

    conditions – Control (No prime) – Vertical Prime – Horizontal Prime
  27. Priming •  Participants in a priming condition were told that

    they may occasionally see some blinking “dots”. –  The priming dots could appear in a vertical or a horizontal alignment, blinking on and off alternately •  They were white circles with a thin black outline and had a radius of 15 pixels •  One would appear for 500ms, then blink off. A gray screen would be shown for 100ms, then the other dot would blink on for another 500ms.
  28. Priming •  If participants were in a priming condition, they

    saw 5 iterations of the dot cycles at the very beginning of the experiment •  Then 3 more iterations after every 5 training trials
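A rough sketch of this dot-prime presentation, assuming PsychoPy for display (the dot positions, window settings, and function names are assumptions; the timing and dot size follow the two preceding slides):

```python
from psychopy import visual, core

win = visual.Window(color='gray', units='pix')

def make_dot(pos):
    # White circle with a thin black outline, radius 15 px (per slide 27)
    return visual.Circle(win, radius=15, fillColor='white',
                         lineColor='black', lineWidth=1, pos=pos)

def show_prime(orientation, n_cycles):
    """Blink two dots alternately: 500 ms on, 100 ms gray gap between dots."""
    offset = 100  # distance of each dot from center, in px (an assumption)
    if orientation == 'vertical':
        dots = [make_dot((0, offset)), make_dot((0, -offset))]
    else:  # 'horizontal'
        dots = [make_dot((-offset, 0)), make_dot((offset, 0))]
    for _ in range(n_cycles):
        for dot in dots:
            dot.draw()
            win.flip()
            core.wait(0.5)   # dot on for 500 ms
            win.flip()       # blank gray screen
            core.wait(0.1)   # 100 ms gap

show_prime('vertical', n_cycles=5)  # 5 iterations at the very start of the experiment
# ...then 3 more iterations after every 5 training trials (per slide 28)
win.close()
```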
  29. Task Training •  They were asked to sit at a

    computer and told: –  They were going to see pairs of shapes –  Each pair would be positioned according to some “rule” –  If they thought a pair followed the “rule” they pressed “A”, if they thought it did not follow the “rule”, they pressed “L” –  They would get feedback with every key press –  So, they would not be told the rule, but needed to determine it over the course of the experiment
  30. Task Training •  Training involved stimuli that were “relationally ambiguous”.

    – Used circles and squares – Two categories, with two possible values •  Occluder to the Left/Right •  Occluder Above/Below
  31. Category Examples

    [Figure: example stimuli; A and B illustrate the Above/Below relation, C and D illustrate the Left/Right relation]
  32. Training •  One value from each category was randomly paired

    with a value from the other for the training phase and then associated with “correct”. The opposite was associated with “incorrect”. –  E.g., they could see A/C and B/D combinations OR A/D and B/C combinations –  Thus, either rule could be learned
  33. Training cont. •  Training began by presenting 8 examples of

    the same type – E.g., A/D, A/D, A/D, A/D, A/D, A/D, A/D, A/D… random •  Specified by Clapper (2009) •  Counterbalanced across participants •  After the initial training sequence, the program counted until participants correctly classified 10 exemplars in a row.
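A minimal sketch of this training procedure, with hypothetical helpers `present_trial`, `get_response`, and `give_feedback` standing in for the actual stimulus and response handling:

```python
import random

def run_training(initial_exemplar, training_exemplars, correct_answer_for,
                 present_trial, get_response, give_feedback):
    """Hypothetical training loop: 8 identical exemplars first (per Clapper, 2009),
    then training trials with feedback until 10 correct classifications in a row."""
    # Initial block: 8 exemplars of the same type, with feedback
    for _ in range(8):
        response = get_response(present_trial(initial_exemplar))
        give_feedback(response == correct_answer_for[initial_exemplar])

    # Criterion phase: count until 10 consecutive correct classifications
    consecutive_correct = 0
    while consecutive_correct < 10:
        exemplar = random.choice(training_exemplars)
        response = get_response(present_trial(exemplar))   # "A" or "L" keypress
        correct = response == correct_answer_for[exemplar]
        give_feedback(correct)
        consecutive_correct = consecutive_correct + 1 if correct else 0
```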
  34. Test Phase •  Once at criterion, participants stopped getting feedback

    •  Told to use exactly the same rule •  If in a priming condition, all priming stopped. •  Then presented with 7 exemplars of each possible variable combination – They would see A/C, A/D, B/C, and B/D exemplars.
  35. Test Phase cont. •  By presenting all possible combinations, and

    tracking responses on novel combinations we could discern which rule was learned.
  36. Test Phase cont. •  By presenting all possible combinations, and

    tracking responses on novel combinations we could discern which rule was learned. [Example – Training set: A/C = “A”, B/D = “L”; tested on the novel pairing A/D = ?]
  37. Rule Learning •  Rule was considered “learned” when no more

    than 3 inconsistent responses were made across the 14 novel pairings – Exception: both rules were reported (such data would look analogous to learning neither) •  They were considered to have learned both rules only when they explicitly reported both, and made no more than three inconsistent responses based on both rules.
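The rule-learning criterion could be expressed roughly as below; the `predict_*` helpers are hypothetical placeholders for the keypress each rule (or the combination of both) would require on a given novel pairing:

```python
def inconsistencies(responses, predict):
    """Count novel-pairing responses (out of 14) that disagree with a rule's predictions.
    `responses` maps an exemplar (e.g., "A/D") to the key pressed ("A" or "L");
    `predict` maps an exemplar to the key the candidate rule would require."""
    return sum(1 for exemplar, key in responses.items() if key != predict(exemplar))

def classify_learning(responses, predict_horizontal, predict_vertical,
                      predict_both, reported_both):
    """Rough version of the slide-37 criterion: a rule counts as learned with no more
    than 3 inconsistent responses; "both" additionally requires an explicit report of
    both rules plus responses consistent with predictions based on both rules."""
    if reported_both and inconsistencies(responses, predict_both) <= 3:
        return "both"
    if inconsistencies(responses, predict_horizontal) <= 3:
        return "horizontal"
    if inconsistencies(responses, predict_vertical) <= 3:
        return "vertical"
    return "neither"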
  38. Results

    Rule learned, by condition:
    –  Control: Horizontal rule 13, Vertical rule 7, Both 11, None 4
    –  Vertical Prime: Horizontal rule 7, Vertical rule 17, Both 5, None 5
    –  Horizontal Prime: Horizontal rule 15, Vertical rule 9, Both 8, None 4
    A significant number of participants learned the rule associated with the presented prime: χ² = 10.433, p < .05
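The slide does not show which cells entered the χ² test, so the following SciPy sketch is only illustrative of how a test of rule learned by priming condition could be run on counts like those above (it will not necessarily reproduce the reported 10.433):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts of participants learning each single rule, by condition (from the table above)
#                 Horizontal rule  Vertical rule
counts = np.array([
    [13,  7],   # Control
    [ 7, 17],   # Vertical prime
    [15,  9],   # Horizontal prime
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
# Note: the reported value (10.433) may come from a different selection of cells.
```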
  39. Discussion •  Findings – It does appear that relational category learning

    and relational reasoning can be primed •  Priming can be quite subtle – Visuo-spatial and attention inputs are sufficient for this effect •  Open Questions – Generalizability of findings
  40. Experiment 2 •  Very generally: Can vision and space shape

    more abstract relations? •  More specifically: Can visuospatial priming affect the probability of making a relational mapping under time constraints by exploiting that relation’s image schema?
  41. Participants •  Data from 243 participants were collected through UC

    Merced’s SONA system – The data from 18 participants were thrown out because those participants were unable to maintain the requisite task switching.
  42. Design •  24 verbs were selected – 8 with known horizontal

    image schemas – 8 with known vertical image schemas – 16 with weak-to-no image schemas •  Richardson, Spivey, Edelman, & Naples (2001); Richardson, Spivey, Barsalou, & McRae (2003); Meteyard & Vigliocco (2009) •  Normed via mTurk to have 70% agreement
  43. Verbs

    –  Training: Riding, Talking-to, Balancing, Feeding, Sheltering, Scolding, Hitting, Brushing
    –  Horizontal: Chasing, Pulling, Pushing, Kicking, Towing, Pointing-at, Hunting, Giving-to
    –  Vertical: Pouring-on, Dropping, Hanging-from, Hoisting-up, Lifting, Reaching-for, Bombing, Climbing
    –  Filler: Kissing, Playing-with, Resting-on, Cooking, Cleaning, Driving, Opening, Performing-for
  44. Analogy Stimuli •  Visual analogy problems were created –  Simple

    line-drawings adapted from Richland et al. (2006) •  Normed to have 6 “objects” each •  2 images per relation, creating a “base” analog and a “target” analog –  Base shown on top, target shown on bottom –  An object in the top was circled in red, and participants had to select the thing in the bottom image that was “doing the same thing” as the circled thing.
  45. Method cont. •  Problem: Similar stimuli were used in a

    previous experiment and participants performed at ceiling (Livins & Doumas, in press) •  So, can we alter the amount of time that someone needs to be exposed to a crossmapping analogy problem by priming the image schema of that relation?
  46. Basic Design •  Notes – Temporal intervals were treated as conditions

    •  Showed the base image for –  400, 500, 600, 700, or 800 ms – Design was across participants
  47. Priming •  Experiment described to be about “multi- tasking” – 

    Used 2 computers •  One for the analogy task, one for priming •  Participants alternated between them –  The priming computer was described as a “ball counting task” –  Dots would blink on and off in an alternating pattern •  10 dots would appear in total •  A random number of them would be red •  Participants were told to count the red dots only
  48. Priming cont. •  The dots were aligned so that the

    eye movements required to track them would be congruent or incongruent with the image schema of the verb directly following –  Participants were assigned to either a congruent or incongruent condition –  Resulted in a 5 (presentation time: 400, 500, 600, 700, 800 ms) × 2 (congruent vs. incongruent prime) factorial design
  49. Method •  Participants completed 8 training cycles –  1 cycle

    involved one ball counting task followed by 1 analogy problem –  No priming (verbs used did not show strong image schemas) –  4 were crossmappings, 4 were not •  If a participant successfully completed the training they moved onto the test trials –  24 test cycles –  Horizontal and vertical verbs were depicted with crossmappings, while fillers were not
  50. Results

    Condition: F(1, 225) = 48.332, p < .01; Presentation Time: F(4, 225) = 22.107, p < .01
    Condition: F(1, 225) = 49.968, p < .01; Presentation Time: F(4, 225) = 29.741, p < .01
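A hedged sketch of the corresponding two-way ANOVA (condition by presentation time) using statsmodels; the data file and column names are assumptions:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data frame: one row per participant, with prime condition,
# presentation time (ms), and a score on the analogy task.
df = pd.read_csv("experiment2_scores.csv")   # columns: condition, time, score (assumed)

# Main effects of condition and presentation time, as reported on the slide
model = ols("score ~ C(condition) + C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```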
  51. Results (Congruent vs. Incongruent, planned comparisons)

    –  400 ms: M = 10.38, SD = 2.64 vs. M = 7.85, SD = 3.133; t(21,20) = 2.804, p < .01
    –  500 ms: M = 10.41, SD = 2.26 vs. M = 7.77, SD = 3.94; t(22,22) = 2.723, p = .01
    –  600 ms: M = 11.90, SD = 1.92 vs. M = 8.91, SD = 3.24; t(20,22) = 3.598, p < .01
    –  700 ms: M = 11.56, SD = 1.92 vs. M = 9.13, SD = 3.52; t(25,24) = 2.991, p < .01
    –  800 ms: M = 12.17, SD = 1.74 vs. M = 9.68, SD = 2.84; t(24,25) = 3.715, p < .01
  52. Results (Congruent vs. Incongruent, planned comparisons)

    –  400 ms: M = 4.71, SD = 2.43 vs. M = 6.75, SD = 2.77; t(21,20) = 2.504, p < .017
    –  500 ms: M = 4.18, SD = 2.06 vs. M = 6.86, SD = 3.20; t(22,22) = 3.307, p < .01
    –  600 ms: M = 3.25, SD = 1.68 vs. M = 6.00, SD = 3.02; t(20,22) = 3.591, p < .01
    –  700 ms: M = 3.44, SD = 1.96 vs. M = 5.58, SD = 3.27; t(25,24) = 2.769, p < .01
    –  800 ms: M = 3.00, SD = 1.64 vs. M = 5.23, SD = 2.49; t(24,25) = 3.703, p < .01
  53. Discussion •  Priming complex relations is possible •  Visuospatial priming

    appears to affect the presentation time required for completing a relational crossmapping –  In other words, it doesn’t just affect how quickly you can complete the problem, but it can affect whether you complete it at all •  A relation’s features are important on an ongoing basis •  Open Question: –  How does the priming occur?
  54. Experiment 3 •  Very Generally: To determine what causes the

    priming effects observed in the previous experiments •  More specifically: To prime an image schema and the visual attention needed to view a stimulus in opposite directions, and observe which one overpowers the other
  55. Participants •  Data from 70 participants were collected through UC

    Merced’s SONA system – The data from 6 participants were thrown out due to their inability to sufficiently complete the task
  56. Analogy Stimuli •  Very similar to that used in Experiment

    2 •  Capable of being pictorially expressed contrary to image schema •  Differed only on the relations used and presentation time (only 400ms)
  57. Analogy Stimuli

    –  Vertical: Burying, Climbing, Drilling, Launching, Reaching
    –  Horizontal: Chasing, Giving, Hunting, Pointing, Pulling
    –  Fillers: Kissing, Playing-with, Sheltering, Brushing, Opening, Performing-for
    –  Training: Riding, Talking, Feeding, Driving, Cooking, Scolding
  58. Priming •  Exactly the same as that found in Experiment

    2 •  Only difference was that it could be – Congruent with representation and incongruent with attention – Incongruent with representation and congruent with attention
  59. Results

    Between-subjects t test with a correction for unequal variances: t(61.60) = 2.23, p < .05
    Between-subjects t test with a correction for unequal variances: t(59.50) = 2.83, p < .01
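A between-subjects t test with a correction for unequal variances is Welch's test, which is what produces fractional degrees of freedom like t(61.60). A minimal SciPy sketch with placeholder data (random numbers for illustration only, not the experiment's scores):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Placeholder per-participant scores for the two priming conditions (illustrative only)
attention_congruent = rng.normal(loc=10, scale=3, size=32)
representation_congruent = rng.normal(loc=8, scale=4, size=32)

# equal_var=False applies Welch's correction for unequal variances,
# yielding fractional degrees of freedom as reported above.
t, p = ttest_ind(attention_congruent, representation_congruent, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```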
  60. Discussion •  Attention-based priming at the very least overpowers the

    content-based priming •  A few possible explanations – Non-prototypical representation is more difficult to recognize – Representational priming of relations isn’t possible •  Role of attention in relational priming still needs to be explored
  61. Relational Recognition •  Relational scenes often have many relations

    •  Some relations are complementary – E.g., feeding versus eating •  Why notice one over the others? (Livins & Doumas, 2015)
  62. Experiment 4 •  Overall Goal: To explore the relationship between

    visual attention and relational recognition •  Specific Goal: To determine whether the first item of fixation shapes relational recognition in pictorial scenes – First look for correlation – Second look for cause
  63. Experiment 4a Participants •  Data from 58 participants were collected

    through UC Merced’s SONA system – The data from 2 more participants were collected and thrown out due to a poor eye-tracking lock
  64. Materials •  21 pictorial scenes like those found in Experiments

    2 and 3 •  Key relations could be described differently, based on which item was bound to the actor role – Key items centered
  65. Materials cont. •  Filler items had a single prominent relation

    – Key items not centered •  Image direction counterbalanced across participants
  66. Relations (Possible Relation 1 / Possible Relation 2 – Objects Used)

    –  Chasing / Escaping – Boy, Cat
    –  Talking / Listening – Woman1, Woman2
    –  Lifting / Hanging – Woman, Monkey
    –  Hunting / Escaping – Man, Elephant
    –  Kicking / Cowering – Boy, Dog
    –  Showing / Watching – Boy, Woman
    –  Dropping / Falling – Woman, Baby
    –  Pulling / Riding – Boy, Dog
    –  Eating / Feeding – Mother, Child
    –  Pushing / Riding – Girl, Boy
  67. Method •  Images presented with a text box below • 

    Participants asked to type in the relation that they thought was most important •  A single training trial •  Eyes tracked throughout task
  68. Results •  Participant answers coded for “actor” or “patient”

    with reference to one of the primary relations – Only “actor” and “patient” answers were allowed •  Eye tracking data was used to find the items of first fixation
  69. Results •  Mixed effects logistic model – Criterion variable: actor/patient orientation

    of answer – Predictor variable: first fixation – Random factors: participant, items •  When tested against a null model: χ²(1) = 3.926, p < .05
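A simplified sketch of testing the fixation predictor against a null model via a likelihood-ratio test, using statsmodels; it omits the random factors for participants and items (which the reported mixed-effects model included and which would ordinarily be fit with a mixed-effects package), and the column names are assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical data frame: one row per trial, with a binary actor/patient answer (1/0)
# and a binary indicator of whether the first fixation landed on the actor.
df = pd.read_csv("experiment4a_responses.csv")  # columns: actor_answer, first_fix_actor (assumed)

null_model = smf.logit("actor_answer ~ 1", data=df).fit(disp=False)
full_model = smf.logit("actor_answer ~ first_fix_actor", data=df).fit(disp=False)

# Likelihood-ratio test of the fixation predictor against the null model
lr_stat = 2 * (full_model.llf - null_model.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"chi2(1) = {lr_stat:.3f}, p = {p_value:.3f}")
# The reported analysis was a mixed-effects logistic model; this sketch drops the random effects.
```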
  70. Discussion •  There seems to be a relationship between fixation

    and relational recognition •  But is it...
  71. Discussion •  There seems to be a relationship between fixation

    and relational recognition •  But is it… causal?
  72. Experiment 4b Participants •  Data from 132 participants were collected

    through UC Merced’s SONA system – The data from 4 more participants were collected and thrown out due to a poor eye-tracking lock
  73. Priming •  Exploited eye-tracker’s calibration process •  Directed visual attention

    to a particular item for first fixation •  Counterbalanced primed item and image presentation
  74. Results •  Mixed effects multinomial logistic model – Criterion variable: actor/patient/neutral

    orientation of answer – Predictor variable: first fixation – Random factors: participant, items •  When tested against a null model: χ²(1) = 35.343, p < .01
  75. Discussion •  Relational recognition can be shaped by first fixation

    •  Visual attention is related to relational recognition •  This is not just the case for spatial relationships (Franconeri et al., 2012) •  These findings are consistent with DORA
  76. Broader Discussion •  Relational priming is not only possible but

    reasonably automatic •  Visuospatial cues are sufficient to evoke it •  Visual attention is a mechanism important to this process
  77. Implications •  Certain models of relational cognition become more likely

    – E.g., the DORA model •  Inconsistencies in the literature might be explained by differences in directed attention  
  78. Example •  Bassok et al., 2008 – Could have primed the

    semantics of “going together” – Could have drawn attention to the addition sign
  79. Open Questions •  How attention affects relational cognition – Primes movements

    which make noticing one over the others easier – Primes attention to a relation which activates its representational content – Primes a strategy
  80. Future Directions •  Relational recognition in more detail (Livins &

    Doumas, 2015) •  Better understand the relationship between visuospatial processing and relational reasoning •  Use of priming in more applied contexts – Math-learning
  81. Acknowledgements Alex Doumas, University of Edinburgh, School of

    Philosophy, Psychology, and Language Sciences; David Noelle, UC Merced, Department of Cognitive and Information Sciences; Rick Dale, UC Merced, Department of Cognitive and Information Sciences. And… Teenie Matlock, Michael Spivey, Lindsay Richland, David Landy, Till Bergmann, Aaron Hamer, Andrew Hill, Alexandria Pabst, Jose Balsells, Ashley Miller
  82. Interpreting Exp 1 •  After testing was complete, we also

    debriefed each participant and asked them – 1) What rule they learned. •  Participants were generally able to explicitly state their rule – 2) What they thought the experiment was about. •  No one made an explicit connection between the priming and the task
  83. Experiment 2B •  Adults prefer reasoning based on relations to

    reasoning based on features when circumstances are ideal (Markman & Gentner, 1983) •  Relational reasoning is easily disrupted though –  It breaks down when working memory is taxed (Halford et al., 1998; Waltz et al., 1999) –  It breaks down with some types of brain damage or distraction (Viskontas et al., 2004) –  It breaks down under stress (Vendetti et al., 2012) •  It can be hard to quantify these things •  Time pressure may be related to stress, may interrupt relational reasoning, and is easily quantifiable
  84. Item Analysis Ex3 •  t(9) = 3.073, p<.05 •  Congruently

    primed for attention stimuli – Mean = 52.59 – SD = 38.24 •  Incongruently primed for attention stimuli – M = 40.00 – SD = 29.08
  85. Experiment 4 Filler Items (Primary Relation – Objects Used)

    –  Brushing – Girl, Hair
    –  Cooking – Man, Food
    –  Fighting – Boy1, Boy2
    –  Hoisting – Girl, Monkey
    –  Kissing – Girl, Dog
    –  Opening – Girl, Gift
    –  Pouring – Boy, Water
    –  Reaching – Man, Baby
    –  Scolding – Woman, Girl
    –  Towing – Tow-Truck, Car