
CSC486 Lecture 12

Human-Computer Interaction
Embodiment
(202502)

Javier Gonzalez-Sanchez

February 25, 2025

Transcript

  1. Dr. Javier Gonzalez-Sanchez [email protected] www.javiergs.info office: 14-227

    CSC 486 Human-Computer Interaction Lecture 12. Embodiment
  2. Embodiment • Interaction that involves the

    whole body as a medium for engagement with digital environments. • Theoretical basis: the role of physical space and social context in shaping human interactions. • Immersive Computing (VR/AR): users feel fully present. • Affective Computing: physical actions can generate emotional states, and facial expressions can enhance social and emotional engagement. • The Sensorimotor Loop: the body's movement provides feedback that shapes perception and decision-making.
  3. Motion Tracking • Meta Quest uses a combination

    of hand tracking and AI-based body estimation to track the user's movements. The system primarily focuses on head, hand, and body tracking, with emerging techniques for full-body tracking. • The headset itself contains all the sensors necessary to track motion. • Multiple outward-facing cameras scan the environment and detect changes in position. • Simultaneous Localization and Mapping (SLAM) algorithms build a map of the space and track the user's movement within it. • 6 Degrees of Freedom (6DoF): Meta Quest tracks the position (X, Y, Z) and rotation (pitch, yaw, roll) of the headset and controllers in real time.
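    A minimal sketch of what one 6DoF sample could look like in code (a hypothetical Pose6DoF class; the field names and units are assumptions, not the Meta SDK):

    import java.util.Locale;

    // One 6DoF sample: position (x, y, z) plus rotation (pitch, yaw, roll).
    public class Pose6DoF {
        public final double x, y, z;          // position in meters
        public final double pitch, yaw, roll; // rotation in degrees

        public Pose6DoF(double x, double y, double z,
                        double pitch, double yaw, double roll) {
            this.x = x; this.y = y; this.z = z;
            this.pitch = pitch; this.yaw = yaw; this.roll = roll;
        }

        @Override
        public String toString() {
            return String.format(Locale.US,
                "pos=(%.2f, %.2f, %.2f) rot=(%.1f, %.1f, %.1f)",
                x, y, z, pitch, yaw, roll);
        }
    }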
  4. Hands and Hand-Gestures • Infrared Cameras

    & AI: The headset's cameras track hand position, finger movements, and gestures using infrared light and AI-based computer vision models. • Skeleton Model Estimation: The system identifies key points (knuckles, fingertips, palm center) and reconstructs a 3D model of the hands. • Gestures as Inputs: Recognizes pinches, swipes, open/closed hands, pointing, and other movements.
  5. Body Tracking (Estimation) • Meta Quest devices do not

    have built-in full-body tracking, but Meta introduced AI-based body tracking solutions. • Upper-Body Estimation: Using hand tracking + head movement, the system infers the position of shoulders, elbows, and torso. • Inverse Kinematics (IK): AI predicts the position of hidden body parts (like elbows) based on hand and head movement patterns. • VR applications use IK models to simulate full-body motion with limited tracking points.
  6. Body Tracking (Estimation) • No direct leg tracking

    (lower-body movements are not natively captured). • Uses AI inference to approximate walking and sitting poses. • Some VR apps require external trackers (like Vive trackers) or Kinect-like cameras for full-body motion. • External accessories: waist and foot sensors for more precise tracking.
  7. Inverse Kinematics (IK) in Motion Tracking • Forward

    Kinematics (FK): Given the angles of joints (like an elbow or knee), FK calculates the position of the end-effector (like a hand or foot). • Inverse Kinematics (IK): The opposite of FK: given the position of the end-effector, IK calculates the joint angles needed to achieve that position. • IK estimates elbows and shoulders (using hand positions and movement). • IK estimates torso positioning (using relative hand and head positioning).
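    A minimal FK/IK sketch for a two-link planar arm (a 2D simplification of the 3D problem; the link lengths and the elbow-down choice are assumptions):

    // Two-link planar arm: FK maps joint angles to the hand position;
    // analytic IK maps a hand position back to joint angles.
    public class TwoLinkArm {
        final double l1, l2; // segment lengths (e.g., upper arm and forearm)

        TwoLinkArm(double l1, double l2) { this.l1 = l1; this.l2 = l2; }

        // FK: shoulder angle t1 and elbow angle t2 (radians) -> hand (x, y).
        double[] forward(double t1, double t2) {
            return new double[] {
                l1 * Math.cos(t1) + l2 * Math.cos(t1 + t2),
                l1 * Math.sin(t1) + l2 * Math.sin(t1 + t2)
            };
        }

        // IK: hand (x, y) -> one valid angle pair (elbow-down solution),
        // or null if the target is out of reach.
        double[] inverse(double x, double y) {
            double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2);
            if (c2 < -1 || c2 > 1) return null; // unreachable target
            double t2 = Math.acos(c2);
            double t1 = Math.atan2(y, x)
                      - Math.atan2(l2 * Math.sin(t2), l1 + l2 * Math.cos(t2));
            return new double[] { t1, t2 };
        }

        public static void main(String[] args) {
            TwoLinkArm arm = new TwoLinkArm(0.30, 0.25);       // lengths in meters
            double[] angles = arm.inverse(0.40, 0.20);         // solve for joint angles
            double[] hand = arm.forward(angles[0], angles[1]); // FK recovers the target
            System.out.printf("hand = (%.2f, %.2f)%n", hand[0], hand[1]);
        }
    }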
  8. MQTT Data

    {
      "leftEye": {"x": -0.4216550588607788, "y": 0.8787311911582947, "z": -0.00456150621175766},
      "rightEye": {"x": -0.3755757808685303, "y": 0.8756504058837891, "z": 0.04438880831003189},
      "leftEyeGaze": {"x": 0.050619591027498248, "y": -0.0809454470872879, "z": 0.9954323172569275},
      "rightEyeGaze": {"x": 0.050619591027498248, "y": -0.0809454470872879, "z": 0.9954323172569275},
      "eyeFixationPoint": {"x": 0.11886614561080933, "y": -0.13097167015075684, "z": 2.974684476852417},
      "leftHand": {"x": 0.0, "y": 0.0, "z": 0.0},
      "rightHand": {"x": 0.0, "y": 0.0, "z": 0.0},
      "cube": {"x": -0.5114021897315979, "y": 1.5798050165176392, "z": 0.024640535935759546},
      "head": {"x": -0.7167978286743164, "y": 0.8024232983589172, "z": 0.17002606391906739},
      "torso": {"x": -0.6404322385787964, "y": 0.5270168781280518, "z": 0.035430606454610828},
      "leftFoot": {"x": -0.8061407804489136, "y": -0.16039752960205079, "z": 0.25339341163635256},
      "rightFoot": {"x": -0.5946151614189148, "y": -0.15849697589874268, "z": 0.33175137639045718},
      "hips": {"x": -0.6485552787780762, "y": 0.33673161268234255, "z": 0.0795457512140274},
      "leftArmUp": {"x": -0.8079588413238525, "y": 0.7046946287155151, "z": 0.0354776531457901},
      "leftArmLow": {"x": -0.6874216794967651, "y": 0.5375530123710632, "z": -0.05098365247249603},
      "rightArmUp": {"x": -0.5440698266029358, "y": 0.7054383754730225, "z": 0.16330549120903016},
      "rightArmLow": {"x": -0.6227755546569824, "y": 0.5135259032249451, "z": 0.2464602291584015},
      "leftWrist": {"x": -0.5440698266029358, "y": 0.7054383754730225, "z": 0.16330549120903016},
      "rightWrist": {"x": -0.6227755546569824, "y": 0.5135259032249451, "z": 0.2464602291584015}
    }
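    A sketch of consuming this payload with the Eclipse Paho MQTT client and org.json (the broker URL and topic name are assumptions; the lab setup may differ):

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.json.JSONObject;

    // Subscribes to the body-tracking topic and prints the head position
    // from each incoming JSON message.
    public class BodyDataListener {
        public static void main(String[] args) throws Exception {
            MqttClient client = new MqttClient("tcp://localhost:1883", "csc486-lab");
            client.connect();
            client.subscribe("body/tracking", (topic, msg) -> {
                JSONObject json = new JSONObject(new String(msg.getPayload()));
                JSONObject head = json.getJSONObject("head");
                System.out.printf("head: x=%.2f y=%.2f z=%.2f%n",
                        head.getDouble("x"), head.getDouble("y"), head.getDouble("z"));
            });
        }
    }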
  9. Lab

  10. Body Input Actions on a Java Swing Application • Look

    left → Move the circle left • Look right → Move the circle right • Look up → Move the circle up • Look down → Move the circle down • Raise left hand → Change circle color to red (e.g., “select”) • Raise right hand → Change circle color to blue (e.g., “highlight”) • Lean forward (bend down) → Shrink the circle (closer interaction) • Stand up straight → Expand the circle (broader interaction)
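    A minimal Swing sketch of the circle these actions would drive (rendering side only; how the body-input events arrive, e.g., from the MQTT listener above, is left to the lab):

    import javax.swing.*;
    import java.awt.*;

    // One circle that body input can move, recolor, and resize.
    public class CirclePanel extends JPanel {
        private int x = 150, y = 150, diameter = 50;
        private Color color = Color.BLACK;

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.setColor(color);
            g.fillOval(x, y, diameter, diameter);
        }

        // repaint() is safe to call from a non-Swing (e.g., MQTT) thread.
        public void move(int dx, int dy) { x += dx; y += dy; repaint(); }
        public void recolor(Color c)     { color = c; repaint(); }
        public void scale(int dd)        { diameter = Math.max(10, diameter + dd); repaint(); }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Body Input Lab");
                CirclePanel panel = new CirclePanel();
                frame.add(panel);
                frame.setSize(400, 400);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
                // Example wiring: "look left" -> panel.move(-10, 0);
                // "raise left hand" -> panel.recolor(Color.RED);
                // "lean forward" -> panel.scale(-5).
            });
        }
    }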
  11. CSC 486 Human-Computer Interaction Javier Gonzalez-Sanchez, Ph.D. [email protected] Winter 2025

    Copyright. These slides can only be used as study material for the class CSC 486 at Cal Poly. They cannot be distributed or used for another purpose.