Slide 1

Slide 1 text

Dr. Javier Gonzalez-Sanchez javiergs@calpoly.edu www.javiergs.info office: 14-227 CSC 486 Human-Computer Interaction Lecture 12. Embodiment

Slide 2

Slide 2 text

Embodiment
• Interaction that involves the whole body as a medium for engagement with digital environments.
• Theoretical basis: the role of physical space and social context in shaping human interactions.
• Immersive Computing (VR/AR): users feel fully present.
• Affective Computing: physical actions can generate emotional states, and facial expressions can enhance social and emotional engagement.
• The Sensorimotor Loop: the body's movement provides feedback that shapes perception and decision-making.

Slide 3

Slide 3 text

Motion Tracking
• Meta Quest uses a combination of hand tracking and AI-based body estimation to track the user's movements. The system primarily focuses on head, hand, and body tracking, with emerging techniques for full-body tracking.
• The headset itself contains all the necessary sensors to track motion.
• Multiple outward-facing cameras scan the environment and detect changes in position.
• Uses Simultaneous Localization and Mapping (SLAM) algorithms to create a map of the space and track the user's movement within it.
• 6 Degrees of Freedom (6DoF): Meta Quest can track position (X, Y, Z) and rotation (pitch, yaw, roll) of the headset and controllers in real time (see the pose sketch after this list).
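A 6DoF pose is simply a position plus an orientation. Below is a minimal Java sketch of that idea; the class and field names are illustrative, not from the Meta SDK (which, like most real SDKs, represents rotation as a quaternion rather than pitch/yaw/roll).

// A minimal 6DoF pose: position (x, y, z) plus rotation (pitch, yaw, roll).
// Names are illustrative; real SDKs typically use quaternions for rotation.
public class Pose6DoF {
    public final double x, y, z;          // position in meters
    public final double pitch, yaw, roll; // rotation in radians

    public Pose6DoF(double x, double y, double z,
                    double pitch, double yaw, double roll) {
        this.x = x; this.y = y; this.z = z;
        this.pitch = pitch; this.yaw = yaw; this.roll = roll;
    }

    @Override
    public String toString() {
        return String.format("pos=(%.2f, %.2f, %.2f) rot=(%.2f, %.2f, %.2f)",
                x, y, z, pitch, yaw, roll);
    }
}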

Slide 4

Slide 4 text

Hands and Hand-Gestures
• Infrared Cameras & AI: The headset's cameras track hand position, finger movements, and gestures using infrared light and AI-based computer vision models.
• Skeleton Model Estimation: The system identifies key points (knuckles, fingertips, palm center) and reconstructs a 3D model of the hands.
• Gestures as Inputs: Recognizes pinches, swipes, open/closed hands, pointing, and other movements (a pinch-detection sketch follows this list).
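Once the skeleton key points are available, many gesture recognizers reduce to simple geometry over them. A hedged sketch of pinch detection in Java, assuming we receive thumb-tip and index-tip positions as {x, y, z} arrays from some tracking source; the 2 cm threshold is an assumed value:

// Detects a pinch from skeleton key points: a pinch is typically reported
// when the thumb tip and index fingertip are nearly touching.
public class PinchDetector {
    // Threshold in meters; an assumed value, tune for your tracking source.
    private static final double PINCH_THRESHOLD = 0.02;

    public static boolean isPinching(double[] thumbTip, double[] indexTip) {
        double dx = thumbTip[0] - indexTip[0];
        double dy = thumbTip[1] - indexTip[1];
        double dz = thumbTip[2] - indexTip[2];
        double distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return distance < PINCH_THRESHOLD;
    }
}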

Slide 5

Slide 5 text

Body Tracking (Estimation)
• Meta Quest devices do not have built-in full-body tracking, but Meta introduced AI-based body tracking solutions.
• Upper-Body Estimation: Using hand tracking + head movement, the system infers the position of shoulders, elbows, and torso.
• Inverse Kinematics (IK): AI predicts the position of hidden body parts (like elbows) based on hand and head movement patterns.
• VR applications use IK models to simulate full-body motion with limited tracking points.

Slide 6

Slide 6 text

Body Tracking (Estimation)
• No direct leg tracking (lower-body movements are not natively captured).
• Uses AI inference to approximate walking and sitting poses.
• Some VR apps require external trackers (like Vive trackers) or Kinect-like cameras for full-body motion.
• External accessories: waist and foot sensors for more precise tracking.

Slide 7

Slide 7 text

Inverse Kinematics (IK) in Motion Tracking
• Forward Kinematics (FK): Given the angles of joints (like an elbow or knee), FK calculates the position of the end-effector (like a hand or foot).
• Inverse Kinematics (IK): The opposite of FK. Given the position of the end-effector, IK calculates the joint angles needed to achieve that position.
• In VR, IK estimates elbows and shoulders (using hand positions and movement).
• It also estimates torso positioning (using relative hand and head positioning). A worked two-bone example follows this list.
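To make the FK/IK distinction concrete, here is a minimal two-bone planar IK solver in Java using the law of cosines: given a hand (end-effector) target, it recovers shoulder and elbow angles. This is a deliberately simplified sketch of what headset IK does with many more joints; the arm lengths and target in main are hypothetical.

// Two-bone planar IK (shoulder-elbow-hand) via the law of cosines.
public class TwoBoneIK {

    /** Returns {shoulderAngle, elbowAngle} in radians, or null if unreachable. */
    public static double[] solve(double upperLen, double lowerLen,
                                 double targetX, double targetY) {
        double d = Math.hypot(targetX, targetY); // shoulder-to-target distance
        if (d > upperLen + lowerLen || d < Math.abs(upperLen - lowerLen)) {
            return null; // target outside the arm's reachable range
        }
        // Law of cosines gives the interior elbow angle.
        double cosElbow = (upperLen * upperLen + lowerLen * lowerLen - d * d)
                / (2 * upperLen * lowerLen);
        double elbow = Math.acos(cosElbow);
        // Shoulder angle: direction to target minus the upper bone's offset.
        double cosOffset = (upperLen * upperLen + d * d - lowerLen * lowerLen)
                / (2 * upperLen * d);
        double shoulder = Math.atan2(targetY, targetX) - Math.acos(cosOffset);
        return new double[] { shoulder, elbow };
    }

    public static void main(String[] args) {
        // Hypothetical arm: 0.30 m upper arm, 0.25 m forearm, hand at (0.4, 0.2).
        double[] angles = solve(0.30, 0.25, 0.4, 0.2);
        System.out.printf("shoulder=%.2f rad, elbow=%.2f rad%n",
                angles[0], angles[1]);
    }
}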

Slide 8

Slide 8 text

Eye Tracking - Special Mention!
• Gaze Direction!
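A gaze direction is a unit vector (like leftEyeGaze in the data stream later in this deck), and turning it into coarse commands such as the lab's look left/right/up/down can be as simple as thresholding its x and y components. A hedged sketch; the dead-zone value and the sign conventions (negative x = left, positive y = up) are assumptions to check against your data.

// Classifies a normalized gaze-direction vector into a coarse command.
// Assumes negative x looks left and positive y looks up; flip signs if
// your coordinate convention differs. The 0.2 dead zone is an assumption.
public class GazeClassifier {
    public enum Direction { LEFT, RIGHT, UP, DOWN, CENTER }

    public static Direction classify(double x, double y) {
        final double deadZone = 0.2;
        if (Math.abs(x) < deadZone && Math.abs(y) < deadZone) {
            return Direction.CENTER;
        }
        if (Math.abs(x) >= Math.abs(y)) {
            return x < 0 ? Direction.LEFT : Direction.RIGHT;
        }
        return y > 0 ? Direction.UP : Direction.DOWN;
    }
}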

Slide 9

Slide 9 text

Our Data

Slide 10

Slide 10 text

Good Idea

Slide 11

Slide 11 text

MQTT Data
{
  "leftEye": {"x":-0.4216550588607788,"y":0.8787311911582947,"z":-0.00456150621175766},
  "rightEye": {"x":-0.3755757808685303,"y":0.8756504058837891,"z":0.04438880831003189},
  "leftEyeGaze": {"x":0.050619591027498248,"y":-0.0809454470872879,"z":0.9954323172569275},
  "rightEyeGaze": {"x":0.050619591027498248,"y":-0.0809454470872879,"z":0.9954323172569275},
  "eyeFixationPoint": {"x":0.11886614561080933,"y":-0.13097167015075684,"z":2.974684476852417},
  "leftHand": {"x":0.0,"y":0.0,"z":0.0},
  "rightHand": {"x":0.0,"y":0.0,"z":0.0},
  "cube": {"x":-0.5114021897315979,"y":1.5798050165176392,"z":0.024640535935759546},
  "head": {"x":-0.7167978286743164,"y":0.8024232983589172,"z":0.17002606391906739},
  "torso": {"x":-0.6404322385787964,"y":0.5270168781280518,"z":0.035430606454610828},
  "leftFoot": {"x":-0.8061407804489136,"y":-0.16039752960205079,"z":0.25339341163635256},
  "rightFoot": {"x":-0.5946151614189148,"y":-0.15849697589874268,"z":0.33175137639045718},
  "hips": {"x":-0.6485552787780762,"y":0.33673161268234255,"z":0.0795457512140274},
  "leftArmUp": {"x":-0.8079588413238525,"y":0.7046946287155151,"z":0.0354776531457901},
  "lefArmLow": {"x":-0.6874216794967651,"y":0.5375530123710632,"z":-0.05098365247249603},
  "rightArmUp": {"x":-0.5440698266029358,"y":0.7054383754730225,"z":0.16330549120903016},
  "rightArmLow": {"x":-0.6227755546569824,"y":0.5135259032249451,"z":0.2464602291584015},
  "leftWrist": {"x":-0.5440698266029358,"y":0.7054383754730225,"z":0.16330549120903016},
  "rightWrist": {"x":-0.6227755546569824,"y":0.5135259032249451,"z":0.2464602291584015}
}
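A minimal sketch of consuming this stream in Java, assuming the Eclipse Paho MQTT client (org.eclipse.paho.client.mqttv3) and the org.json parser are on the classpath; the broker URL ("tcp://localhost:1883") and topic name ("body/tracking") are placeholders, use the ones given in class.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.json.JSONObject;

// Subscribes to the body-tracking stream and reads one joint position
// per message. Broker URL and topic are assumptions, not from the slides.
public class BodyDataSubscriber {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "csc486-lab");
        client.connect();
        client.subscribe("body/tracking", (topic, message) -> {
            JSONObject data = new JSONObject(new String(message.getPayload()));
            JSONObject head = data.getJSONObject("head");
            System.out.printf("head: x=%.3f y=%.3f z=%.3f%n",
                    head.getDouble("x"), head.getDouble("y"), head.getDouble("z"));
        });
    }
}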

Slide 12

Slide 12 text

Your Java Desktop Application

Slide 13

Slide 13 text

Demo

Slide 14

Slide 14 text

Questions

Slide 15

Slide 15 text

Lab

Slide 16

Slide 16 text

Body Input → Action on a Java Swing Application
• Look left → Move the circle left
• Look right → Move the circle right
• Look up → Move the circle up
• Look down → Move the circle down
• Raise left hand → Change circle color to red (e.g., "select")
• Raise right hand → Change circle color to blue (e.g., "highlight")
• Lean forward (bend down) → Shrink the circle (closer interaction)
• Stand up straight → Expand the circle (broader interaction)
A starter sketch follows this list.
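A minimal Swing starter for the lab: a panel with a circle that moves, recolors, and resizes. How you drive it is up to you; for example, call these methods from the MQTT callback shown earlier. The method names and step sizes are illustrative, not required by the assignment.

import java.awt.*;
import javax.swing.*;

// A circle that responds to the lab's body inputs. Wire the methods to
// your own event source (e.g., the MQTT subscriber sketched earlier).
public class CirclePanel extends JPanel {
    private int x = 150, y = 150, diameter = 60;
    private Color color = Color.GRAY;

    public void lookLeft()  { x -= 10; repaint(); }
    public void lookRight() { x += 10; repaint(); }
    public void lookUp()    { y -= 10; repaint(); }
    public void lookDown()  { y += 10; repaint(); }
    public void raiseLeftHand()   { color = Color.RED;  repaint(); } // "select"
    public void raiseRightHand()  { color = Color.BLUE; repaint(); } // "highlight"
    public void leanForward()     { diameter = Math.max(10, diameter - 10); repaint(); }
    public void standUpStraight() { diameter = Math.min(200, diameter + 10); repaint(); }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(color);
        g.fillOval(x - diameter / 2, y - diameter / 2, diameter, diameter);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Body Input Lab");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new CirclePanel());
            frame.setSize(400, 400);
            frame.setVisible(true);
        });
    }
}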

Slide 17

Slide 17 text

CSC 486 Human-Computer Interaction Javier Gonzalez-Sanchez, Ph.D. javiergs@calpoly.edu Winter 2025 Copyright. These slides can only be used as study material for the class CSC 486 at Cal Poly. They cannot be distributed or used for another purpose.