Dr. Javier Gonzalez-Sanchez
[email protected]
www.javiergs.info
office: 14-227
CSC 570
Current Topics in Computer Science
Applied Affective Computing
Lecture 08. Eye Tracking Hands-on Lab
Slide 2
Previously
Slide 3
How it Works
https://connect.tobii.com/s/article/How-do-Tobii-eye-trackers-work
Slide 4
Eye
Slide 7
Clustering
• Unsupervised Learning
• Clustering is the task of dividing a population (data points) into a number of groups such that data points in the same group are similar to one another
Slide 8
Algorithms
• K-Means - based on the distance between points. Minimizes a squared-error criterion (both criteria are written out below).
• DBSCAN (Density-Based Spatial Clustering of Applications with Noise) - based on the distance between nearest points.
• Simple EM (Expectation Maximization) - finds the likelihood of an observation belonging to a cluster (a probability). Maximizes a log-likelihood criterion.
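For reference, the two criteria named above in standard notation (the symbols are assumed here, not defined on the slides): K-Means minimizes the within-cluster sum of squared distances, and EM maximizes the log-likelihood of the data under a mixture model.

% K-Means: squared-error criterion over clusters C_j with centroids \mu_j
J = \sum_{j=1}^{k} \sum_{x \in C_j} \lVert x - \mu_j \rVert^2

% EM (mixture model): log-likelihood over points x_i, mixture weights \pi_j
\log L = \sum_{i=1}^{n} \log \sum_{j=1}^{k} \pi_j \, p(x_i \mid \theta_j)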
Slide 9
Algorithm: K-Means
Slide 10
Similarity
• One of the simplest ways to calculate the distance between two feature vectors is to use Euclidean Distance (see the formula below).
• Other options: Minkowski distance, Manhattan distance, Hamming distance, Cosine distance, …
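Written out, the Euclidean distance between two feature vectors x and y of length n (standard definition; the slide gives no formula):

d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}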
Slide 11
Algorithm: K-Means
• K-Means begins with k randomly placed centroids. Centroids are the center points of the clusters.
• Iteration:
• Assign each existing data point to its nearest centroid.
• Move each centroid to the average location of the points assigned to it.
• Repeat the iteration until the assignments stop changing between consecutive iterations (a minimal code sketch follows).
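A minimal, self-contained Java sketch of the loop above (the 2-D points are hypothetical values chosen for illustration; this is not the course's reference code - see the GitHub link on the Code slide for that):

import java.util.Arrays;
import java.util.Random;

public class KMeansSketch {

  public static void main(String[] args) {
    // Hypothetical 2-D feature vectors (e.g., gaze x/y coordinates).
    double[][] points = {
        {1, 1}, {1.5, 2}, {3, 4}, {5, 7}, {3.5, 5}, {4.5, 5}, {3.5, 4.5}
    };
    int k = 2;
    Random rnd = new Random(42);

    // 1. Begin with k randomly placed centroids (here: k random data points).
    double[][] centroids = new double[k][];
    for (int j = 0; j < k; j++) {
      centroids[j] = points[rnd.nextInt(points.length)].clone();
    }

    int[] assignment = new int[points.length];
    Arrays.fill(assignment, -1);     // -1 = not yet assigned
    boolean changed = true;
    while (changed) {                // 3. Repeat until assignments stop changing.
      changed = false;
      // 2a. Assign each data point to its nearest centroid (Euclidean distance).
      for (int i = 0; i < points.length; i++) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int j = 0; j < k; j++) {
          double d = Math.hypot(points[i][0] - centroids[j][0],
                                points[i][1] - centroids[j][1]);
          if (d < bestDist) { bestDist = d; best = j; }
        }
        if (assignment[i] != best) { assignment[i] = best; changed = true; }
      }
      // 2b. Move each centroid to the average location of its assigned points.
      for (int j = 0; j < k; j++) {
        double sumX = 0, sumY = 0;
        int count = 0;
        for (int i = 0; i < points.length; i++) {
          if (assignment[i] == j) { sumX += points[i][0]; sumY += points[i][1]; count++; }
        }
        if (count > 0) { centroids[j][0] = sumX / count; centroids[j][1] = sumY / count; }
      }
    }
    System.out.println("Assignments: " + Arrays.toString(assignment));
  }
}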
Slide 12
K-Means Problems
• K-Means clustering may cluster loosely related observations together. Every observation eventually becomes part of some cluster, even if the observations are scattered far away in the vector space.
• Clusters depend on the mean value of the cluster elements; each data point plays a role in forming the clusters. A slight change in the data points might affect the clustering outcome.
• Another challenge with K-Means is that you need to specify the number of clusters (“k”) in order to use it. Much of the time, we won’t know what a reasonable k value is a priori.
Slide 13
Code: Record
13
https://github.com/javiergs/Medium/tree/main/Clustering
Slide 16
DBSCAN
• The algorithm proceeds by arbitrarily picking a point in the dataset.
• If there are at least N points within a radius of E of that point, then we consider all these points to be part of the same cluster.
• Repeat until all points have been visited (a minimal sketch follows).
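A minimal Java sketch of that procedure (the 2-D data is hypothetical; eps stands for the radius E and minPts for N from the slide; points that are not dense enough are labeled as noise, as in standard DBSCAN):

import java.util.ArrayDeque;
import java.util.Arrays;

public class DbscanSketch {

  static final int UNVISITED = 0, NOISE = -1;

  public static void main(String[] args) {
    // Hypothetical 2-D points: two dense groups plus one isolated point.
    double[][] points = {
        {1, 1}, {1.2, 1.1}, {0.9, 1.0}, {8, 8}, {8.1, 8.2}, {8.2, 7.9}, {50, 50}
    };
    double eps = 0.5;   // radius E
    int minPts = 3;     // minimum neighborhood size N

    int[] label = new int[points.length]; // 0 = unvisited, -1 = noise, >0 = cluster id
    int cluster = 0;

    for (int p = 0; p < points.length; p++) {
      if (label[p] != UNVISITED) continue;          // arbitrarily pick an unvisited point
      int[] neighbors = regionQuery(points, p, eps);
      if (neighbors.length < minPts) {              // not dense enough: mark as noise
        label[p] = NOISE;
        continue;
      }
      cluster++;                                    // dense enough: start a new cluster
      label[p] = cluster;
      ArrayDeque<Integer> queue = new ArrayDeque<>();
      for (int n : neighbors) queue.add(n);
      while (!queue.isEmpty()) {                    // expand the cluster from dense points
        int q = queue.poll();
        if (label[q] == NOISE) label[q] = cluster;  // noise reachable from a core point
        if (label[q] != UNVISITED) continue;
        label[q] = cluster;
        int[] qNeighbors = regionQuery(points, q, eps);
        if (qNeighbors.length >= minPts) {
          for (int n : qNeighbors) queue.add(n);
        }
      }
    }
    System.out.println("Labels: " + Arrays.toString(label));
  }

  // All indices within eps of points[p] (including p itself).
  static int[] regionQuery(double[][] points, int p, double eps) {
    return java.util.stream.IntStream.range(0, points.length)
        .filter(i -> Math.hypot(points[i][0] - points[p][0],
                                points[i][1] - points[p][1]) <= eps)
        .toArray();
  }
}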
Slide 17
K-Means vs. DBSCAN
• weka.clusterers: these are clustering algorithms, including K-Means, CLOPE, Cobweb, DBSCAN, hierarchical clustering, and FarthestFirst (a usage sketch follows).
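A sketch of running K-Means through that package (assuming Weka is on the classpath and a hypothetical gaze.arff feature file; Weka's K-Means implementation is named SimpleKMeans, and note that in recent Weka versions DBSCAN ships in the optional optics_dbScan package rather than in the core jar):

import weka.clusterers.ClusterEvaluation;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaClusteringSketch {

  public static void main(String[] args) throws Exception {
    // Load the dataset (gaze.arff is a hypothetical file of gaze features).
    Instances data = DataSource.read("gaze.arff");

    // Weka's K-Means implementation.
    SimpleKMeans kmeans = new SimpleKMeans();
    kmeans.setNumClusters(3);   // "k" must be chosen up front (see the Problems slide)
    kmeans.setSeed(42);         // centroid placement is random, so fix the seed
    kmeans.buildClusterer(data);

    // Report the clusters found on the data.
    ClusterEvaluation eval = new ClusterEvaluation();
    eval.setClusterer(kmeans);
    eval.evaluateClusterer(data);
    System.out.println(eval.clusterResultsToString());
  }
}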
Slide 19
Hands-on Lab
Slide 35
Questions
Slide 36
CSC 570 Applied Affective Computing
Javier Gonzalez-Sanchez, Ph.D.
[email protected]
Spring 2025
Copyright. These slides can only be used as study material for the class CSC 570 at Cal Poly.
They cannot be distributed or used for another purpose.