Slide 1

Dr. Javier Gonzalez-Sanchez
[email protected]
www.javiergs.info
office: 14-227
CSC 570 Current Topics in Computer Science
Applied Affective Computing
Lecture 07. Eye Tracking

Slide 2

Homework

Slide 3

Reading 3

Slide 4

Eyes

Slide 5

How it Works
https://connect.tobii.com/s/article/How-do-Tobii-eye-trackers-work

Slide 6

Eye

Slide 7

Eye

Timestamp        GPX  GPY  Pupil Left  Pupil Right
101124162405582  636  199  2.759313    2.88406
101124162405599  641  207  2.684893    2.855817
101124162405615  659  211  2.624458    2.903861
101124162405632  644  201  2.636186    2.916132
101124162405649  644  213  2.690685    2.831013
101124162405666  628  194  2.651784    2.869714
101124162405682  614  177  2.829281    2.899828
101124162405699  701  249  2.780344    2.907665
101124162405716  906  341  2.853761    2.916398
101124162405732  947  398  2.829427    2.889944
101124162405749  941  400  2.826602    2.881179
101124162405766  938  403  2.78699     2.87948
101124162405782  937  411  2.803387    2.821803
101124162405799  934  397  2.819166    2.871547
101124162405816  941  407  2.811687    2.817927
101124162405832  946  405  2.857419    2.857427
101124162405849    0    0  -1          -1
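Each row pairs a timestamp with a gaze point (GPX, GPY, apparently in screen pixels) and the left/right pupil diameters; the last row (0, 0, -1, -1) presumably marks a sample where no eyes were detected. As a rough sketch only (the whitespace-separated layout and the "-1 means invalid" convention are assumptions based on this excerpt), a log like this could be parsed and cleaned in JavaScript:

// Hypothetical parser for whitespace-separated gaze log lines:
// Timestamp GPX GPY PupilLeft PupilRight
function parseGazeLog(text) {
  return text
    .trim()
    .split("\n")
    .map(function (line) {
      var f = line.trim().split(/\s+/);
      return {
        timestamp: f[0],               // left as a string
        x: parseInt(f[1], 10),         // gaze point x (GPX)
        y: parseInt(f[2], 10),         // gaze point y (GPY)
        pupilLeft: parseFloat(f[3]),
        pupilRight: parseFloat(f[4])
      };
    })
    // assumption: negative pupil values (and gaze 0,0) mark lost samples
    .filter(function (s) { return s.pupilLeft > 0 && s.pupilRight > 0; });
}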

Slide 8

Eye

30 or 60 frames per second
30 or 60 inferences per second
1,800 or 3,600 values per minute
108,000 or 216,000 values per hour
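The arithmetic is simply the sampling rate scaled up; a throwaway sketch:

// Predictions per minute and per hour for a given sampling rate in Hz.
function dataVolume(hz) {
  var perMinute = hz * 60;        // 30 Hz -> 1,800; 60 Hz -> 3,600
  var perHour = perMinute * 60;   // 30 Hz -> 108,000; 60 Hz -> 216,000
  return { perMinute: perMinute, perHour: perHour };
}
console.log(dataVolume(30), dataVolume(60));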

Slide 9

Eye

Slide 10

Affect Recognition: BCI and Gaze Points (engagement)

Slide 11

Affect Recognition: BCI and Gaze Points (frustration)

Slide 12

Affect Recognition: BCI and Gaze Points (engagement)

Slide 13

Affect Recognition: BCI and Gaze Points (frustration)

Slide 14

Visual Attention

Slide 15

What captures attention?
• We only perceive a fraction of the stimuli that enter our consciousness (Moran & Desimone, 1985).
• Many stimuli enter our brain without being detected consciously.
• [Survival] was predicated on the ability to efficiently locate critically important events in the surroundings (Öhman, Flykt, & Esteves, 2001, p. 466).
• There are brain regions that monitor the surrounding environment for critical stimuli (Cosmides & Tooby, 2013, p. 205).
• We are more likely to fear events and situations that provided threats to the survival of our ancestors, such as potentially deadly predators, heights, and wide open spaces, than to fear the most frequently encountered potentially deadly objects in our contemporary environment (Öhman & Mineka, 2001, p. 483).

Slide 16

1. Salience: Color, Dimension, Orientation, Size
https://www.kolenda.io/guides/visual-attention

Slide 17

2. Motion: onset, looming, unpredictable, depicted

Slide 18

2. Motion: capacity, body, natural

Slide 19

3. Agents: faces, bodies, animals

Slide 20

4. Spatial Cues: eye gaze, pointing, arrows

Slide 21

4. Spatial Cues: directional words

Slide 22

5. High Arousal: Threat

Slide 23

5. High Arousal: Threat (Algom, Chajut, & Lev, 2004).

Slide 24

5. High Arousal: Sex

Slide 25

6. Unexpectedness: Novelty

Slide 26

7. Self-relevance: your name, your face. Faces are as powerful as names (Tacikowski & Nowicka, 2010).

Slide 27

8. Goal-relevant: no goal
• People are more likely to notice stimuli when they don't have an active goal. Their cognitive load is lower, which leaves spare room for attention (Cartwright-Finch & Lavie, 2007).

Slide 28

8. Goal-relevant: goal-directed
https://www.kolenda.io/guides/visual-attention

Slide 29

Thoughts?

Slide 30

No content

Slide 31

Eye Tracking

Slide 32

JS: Do you know JavaScript?

Slide 33

WebGazer
• https://webgazer.cs.brown.edu
• An eye tracking library that uses common webcams to infer the eye-gaze locations of web visitors on a page in real time.
• Written in JavaScript.
• Can be integrated into a website.
• Runs entirely in the client browser, so no video data needs to be sent to a server, and it requires the user's consent to access their webcam.
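A minimal page that loads WebGazer and asks for webcam consent might look like the sketch below; the script URL and the "gazeData" element id are assumptions here, not something taken from these slides (check the project page for the current include).

<!DOCTYPE html>
<html>
  <head>
    <!-- assumed CDN location; verify against https://webgazer.cs.brown.edu -->
    <script src="https://webgazer.cs.brown.edu/webgazer.js"></script>
  </head>
  <body>
    <div id="gazeData">Waiting for gaze data...</div>
    <script>
      // Prompts for webcam access and starts the prediction loop.
      window.onload = function () {
        webgazer.begin();
      };
    </script>
  </body>
</html>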

Slide 34

Calibration
• https://github.com/brownhci/WebGazer/blob/master/www/calibration.html
• https://webgazer.cs.brown.edu/calibration.html

Slide 35

localForage
• Asynchronous data store with a simple API.
• Allows developers to store many types of data instead of just strings.
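For example, gaze samples could be appended to a persisted array; a rough sketch, where the key name and record shape are invented for illustration and localforage.js is assumed to be loaded on the page:

// Append one gaze sample to an array stored by localForage.
function saveGazeSample(x, y) {
  return localforage.getItem("gazeSamples").then(function (samples) {
    samples = samples || [];                      // nothing stored yet on the first call
    samples.push({ t: Date.now(), x: x, y: y });  // objects are fine, not just strings
    return localforage.setItem("gazeSamples", samples);
  });
}

// Reading everything back later:
// localforage.getItem("gazeSamples").then(function (samples) { console.log(samples); });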

Slide 36

Template
// Resume data collection if the tracker was paused.
webgazer.resume();
// Register a callback that fires with each new gaze prediction.
webgazer.setGazeListener(function (data, elapsedTime) {
  if (data != null) {            // data is null when no prediction is available
    var x = data.x;              // predicted gaze x coordinate, in pixels
    var y = data.y;              // predicted gaze y coordinate, in pixels
    document.getElementById("gazeData").innerHTML =
      "Gaze coordinates: x=" + x + ", y=" + y;
  }
}).begin();                      // start the webcam and the prediction loop
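This snippet assumes webgazer.js has already been loaded on the page and that the page contains an element with id "gazeData" where the coordinates are printed, as in the skeleton sketched on the WebGazer slide above.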

Slide 37

Template

<style>
  html, body { height: 100%; margin: 0; padding: 0; }
  table { width: 100%; height: 100%; border-collapse: collapse; }
  td { border: 1px solid black; }
</style>
<table>
  <tr><td>Cell 1</td><td>Cell 2</td><td>Cell 3</td></tr>
  <tr><td>Cell 4</td><td>Cell 5</td><td>Cell 6</td></tr>
  <tr><td>Cell 7</td><td>Cell 8</td><td>Cell 9</td></tr>
</table>
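To tie the two templates together, the gaze listener could determine which of the nine cells is being looked at. The sketch below assumes the cells form a full-page 3×3 grid as reconstructed above; highlightCell and the yellow background are invented for illustration, and any scroll-offset adjustment of the gaze coordinates is ignored.

// Highlight the table cell under the predicted gaze point.
function highlightCell(x, y) {
  var el = document.elementFromPoint(x, y);   // works in viewport coordinates
  if (el && el.tagName === "TD") {
    // clear the previous highlight, then mark the cell being looked at
    document.querySelectorAll("td").forEach(function (td) {
      td.style.background = "";
    });
    el.style.background = "yellow";
  }
}

// Called from the gaze listener shown on the previous slide:
// webgazer.setGazeListener(function (data) {
//   if (data != null) { highlightCell(data.x, data.y); }
// }).begin();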

Slide 38

Example

Slide 39

Homework

Slide 40

Homework
For a low-cost, low-resolution approach:
• After a while, could it be possible to cluster the gaze points?
• What could be the result of doing that?
• Which approach could work better (k-means, DBSCAN, EM)?

Slide 41

Homework
How to connect this info with EEG?
• Timestamps in JS?
• Save local data in JS?
• Stream to an MQTT broker?

Slide 42

Questions

Slide 43

CSC 570 Applied Affective Computing
Javier Gonzalez-Sanchez, Ph.D.
[email protected]
Spring 2025
Copyright. These slides can only be used as study material for the class CSC 570 at Cal Poly. They cannot be distributed or used for another purpose.