
UIST2024 Papers Quick Read

Bluemo
October 16, 2024


Transcript

  1. MouthIO: Fabricating Customizable Oral User Interfaces with Integrated Sensing and Actuation. Jiang, Yijing and Kleinau, Julia and Eckroth, Till Max and Hoggan, Eve and Mueller, Stefanie and Wessely, Michael. Background: Customizable oral user interfaces are largely unexplored. Purpose: The paper aims to create affordable, safe intraoral interfaces. System: MouthIO is a flexible PCB housed in a customizable brace for oral use. Evaluation: Tested with three applications, plus a full-day user study for feedback. Result: High wearability and bite-resistant capabilities were confirmed.
  2. Can a Smartwatch Move Your Fingers? Compact and Practical Electrical Muscle Stimulation in a Smartwatch. Takahashi, Akifumi and Tanaka, Yudai and Tamhane, Archit and Shen, Alan and Teng, Shan-Yuan and Lopes, Pedro. Background: Haptics in smartwatches is limited, as most actuators cannot create strong force feedback. Purpose: The study aims to develop a practical electrical muscle stimulation (EMS) system for smartwatches to enable finger actuation. System: The proposed solution integrates EMS electrodes into the back of a smartwatch for wrist-based stimulation. Evaluation: The approach was tested with 1,728 wrist stimulation trials and a user calibration study. Result: The integrated EMS was preferred by all participants for its efficiency and social acceptability.
  3. Power-over-Skin: Full-Body Wearables Powered By Intra-Body RF Energy. Kong, Andy and Kim, Daehwa and Harrison, Chris. Background: Small wearables face challenges due to battery bulk and recharging needs. Purpose: The research aims to power wearables without batteries, using the body itself. System: A method called Power-over-Skin delivers power through the human body. Evaluation: A study and experiments validated the Power-over-Skin system's functionality. Result: Demonstration devices show the potential and effectiveness of the approach.
  4. HandPad: Make Your Hand an On-the-go Writing Pad via Human Capacitance. Lu, Yu and Ding, Dian and Pan, Hao and Li, Yijie and Zhou, Juntao and Fu, Yongjian and Zhang, Yongzhao and Chen, Yi-Chao and Xue, Guangtao. Background: AR devices lack efficient text-input solutions. Purpose: HandPad aims to balance portability and efficiency for text input. System: HandPad uses human capacitance to turn the hand into an input device. Evaluation: Experiments validate touch localization and handwriting recognition using a Bi-LSTM. Result: HandPad achieves high accuracy in keystroke and handwriting recognition.
  5. OptiBasePen: Mobile Base+Pen Input on Passive Surfaces by Sensing Relative Base Motion Plus Close-Range Pen Position. Fender, Andreas and Kari, Mohamed. Background: Mobile pens require specialized sensing surfaces, limiting mobility. Purpose: Address pen-input limitations by enabling ordinary surfaces without motion artifacts. System: OptiBasePen senses relative and absolute pen motion using a base and infrared sensors. Evaluation: A prototype integrates high-precision sensors to test on passive surfaces without cameras. Result: OptiBasePen offers drift-free pen input using the base+pen concept on ordinary surfaces.
  6. Palmrest+: Expanding Laptop Input Space with Shear Force on Palm-Resting Area. Yim, Jisu and Bae, Seoyeon and Kim, Taejun and Kim, Sunbum and Lee, Geehyuk. Background: The palmrest area is underutilized despite constant contact during typing. Purpose: The study enhances input capability using the palmrest for richer interaction. System: Palmrest+ introduces Shortcut and Joystick techniques for diverse input methods. Evaluation: Experiments compared the Palmrest+ techniques to standard input for performance and simplicity. Result: Palmrest+ showed improved speed, reduced gaze shifting, and minimized hand movement.
  7. TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision. Streli, Paul and Richardson, Mark and Botros, Fadi and Ma, Shugao and Wang, Robert and Holz, Christian. Background: Egocentric touch input is challenging due to uncertainties in head-mounted camera sensing. Purpose: The research aims to robustly detect touch input to improve mixed-reality interactions. System: TouchInsight uses a neural network to detect touch from all ten fingers, modeling location uncertainty with bivariate Gaussian distributions. Evaluation: Tested offline for touch accuracy and online for text-entry speed and error rate. Result: TouchInsight achieved 37 WPM with a 2.9% error rate in a two-handed text-entry task.
  8. Can Capacitive Touch Images Enhance Mobile Keyboard Decoding? Lertvittayakumjorn, Piyawat and Cai, Shanqing and Dou, Billy and Ho, Cedric and Zhai, Shumin. Background: Mobile keyboards decode taps from the touch centroid, ignoring other spatial signals. Purpose: The research tests whether capacitive heatmaps enhance tap-decoding accuracy. System: Machine-learning models use centroids and/or heatmaps for tap interpretation. Evaluation: Models were evaluated for the contribution of heatmaps to decoding performance. Result: Adding heatmaps reduced error rates by 21.4% and improved user experience.
  9. NotePlayer: Engaging Computational Notebooks for Dynamic Presentation of Analytical Processes. Ouyang, Yang and Shen, Leixian and Wang, Yun and Li, Quan. Background: Tutorial videos enhance understanding of code in Jupyter notebooks. Purpose: Creating such videos is challenging, requiring skill and manual effort. System: NotePlayer automates linking notebook cells to video segments using language models. Evaluation: A formative study of 38 Jupyter tutorial videos identified patterns that informed NotePlayer's design. Result: NotePlayer effectively simplifies video creation for data analysts.
  10. Tyche: Making Sense of PBT Effectiveness. Goldstein, Harrison and Tao, Jeffrey and Hatfield-Dodds, Zac and Pierce, Benjamin C. and Head, Andrew. Background: Software developers use automated methods like property-based testing (PBT) to check code correctness. Purpose: PBT effectiveness is hard to evaluate, so better methods are needed to assess test-input effectiveness. System: Tyche is an interface that helps developers understand PBT effectiveness. Evaluation: A self-guided usability study showed Tyche boosts test-assessment accuracy. Result: Tyche's views help developers accurately gauge software-testing effectiveness.
  11. CoLadder: Manipulating Code Generation via Multi-Level Blocks. Yen, Ryan and Zhu, Jiawen Stefanie and Suh, Sangho and Xia, Haijun and Zhao, Jian. Background: Understanding programmer strategies when using LLMs for coding is crucial. Purpose: The research helps programmers better utilize LLMs for coding tasks. System: CoLadder supports task decomposition and direct code manipulation via multi-level blocks. Evaluation: A user study with 12 experienced programmers assessed CoLadder. Result: CoLadder improves programmers' flexibility and code-evaluation ability.
  12. SQLucid: Grounding Natural Language Database Queries with Interactive Explanations. Tian, Yuan and Kummerfeld, Jonathan K. and Li, Toby Jia-Jun and Zhang, Tianyi. Background: Machine-learning advances have improved natural-language database interfaces, but reliability issues persist. Purpose: SQLucid aims to bridge the gap between non-expert users and complex database queries. System: It combines visual feedback, intermediate results, and step-by-step editable SQL explanations. Evaluation: Two user studies and a quantitative experiment tested SQLucid's effectiveness. Result: SQLucid improves task-completion accuracy and user confidence over existing interfaces.
  13. picoRing: battery-free rings for subtle thumb-to-index input. Takahashi, Ryo and Whitmire, Eric and Boldu, Roger and Ng, Shiu and Kienzle, Wolf and Benko, Hrvoje. Background: Smart rings are bulky due to battery size, limiting sensor integration. Purpose: To develop battery-free smart rings for comfortable, subtle finger input. System: picoRing uses a wristband that supplies energy and detects signals via coupled coils. Evaluation: Four rings enabling thumb-to-finger gestures with unique passive responses were tested. Result: A tiny ring design achieved a stable 13 cm readout despite bending and metal proximity.
  14. WatchLink: Enhancing Smartwatches with Sensor Add-Ons via ECG Interface. Waghmare, Anandghan and Chatterjee, Ishan and Iyer, Vikram and Patel, Shwetak. Background: Smartwatches lack flexible sensor functionality beyond their built-in suites. Purpose: Enhance smartwatches by using ECG hardware to connect add-on sensors. System: WatchLink introduces low-power data communication over existing ECG interfaces. Evaluation: Exploratory tests were conducted and a transmission scheme designed with commodity parts. Result: Add-on sensors met objectives at low cost and power, enabling personalized wearables.
  15. PrISM-Observer: Intervention Agent to Help Users Perform Everyday Procedures Sensed using a Smartwatch. Arakawa, Riku and Yakura, Hiromu and Goel, Mayank. Background: Omitting steps in everyday tasks can lead to serious issues, especially in dementia care. Purpose: The paper aims to create an agent that prevents task errors via smartwatch interventions. System: PrISM-Observer is a real-time, context-aware intervention system using a smartwatch. Evaluation: Step tracking was validated on three datasets and in a cooking-task user study. Result: The smartwatch system provided helpful interventions and received positive feedback.
  16. MagneDot: Integrated Fabrication and Actuation Methods of Dot-Based Magnetic Shape Displays. Sun, Lingyun and Fan, Yitao and Feng, Boyu and Zhang, Yifu and Pan, Deying and Ren, Yiwen and Zhang, Yuyang and Wang, Qi and Tao, Ye and Wang, Guanyun. Background: Magnetic soft materials offer fast morphing, but the tooling is inaccessible to many. Purpose: Improve access to interactive magnetic displays with cost-effective solutions. System: MagneDot integrates mold-making, extrusion, and magnetization for easy fabrication. Evaluation: A design tool enables diverse morphing effects through user-generated G-code. Result: Design examples show MagneDot's potential for versatile shape-display creation.
  17. CARDinality: Interactive Card-shaped Robots with Locomotion and Haptics using Vibration. Retnanto, Aditya and Faracci, Emilie and Sathya, Anup and Hung, Yu-Kai and Nakagaki, Ken. Background: Interactive robots in card form enhance portability and scalability. Purpose: Enable tangible interaction through card-shaped vibrational robots. System: The card robots use vibration for omni-directional locomotion and haptic feedback. Evaluation: Prototypes explored card interactions in education and assistive tools. Result: CARDinality is versatile in applications like augmented card playing.
  18. PortaChrome: A Portable Contact Light Source for Integrated Re-Programmable Multi-Color Textures. Zhu, Yunyi and Honnet, Cedric and Kang, Yixiao and Zhu, Junyi and Zheng, Angelina J and Heinz, Kyle and Tang, Grace and Musk, Luca and Wessely, Michael and Mueller, Stefanie. Background: Traditional color-change methods rely on projectors, limiting interaction. Purpose: This research enables programmable texture changes in everyday objects. System: PortaChrome is a portable contact light source for quickly applying multi-color textures. Evaluation: The study demonstrates examples on textiles and wearables with integrated designs. Result: PortaChrome changes textures 8 times faster than previous methods, within 4 minutes.
  19. Augmented Object Intelligence with XR-Objects. Dogan, Mustafa Doga and Gonzalez, Eric J and Ahuja, Karan and Du, Ruofei and Colaço, Andrea and Lee, Johnny and Gonzalez-Franco, Mar and Kim, David. Background: Integrating physical objects as digital entities is a challenge in spatial computing. Purpose: The research aims to equip physical objects with digital interaction capabilities. System: The study introduces Augmented Object Intelligence (AOI) via XR-Objects, using real-time segmentation and multimodal LLMs. Evaluation: XR-Objects' versatility is demonstrated through use cases and a user study. Result: AOI goes beyond traditional AI assistants by enabling analog objects to perform digital functions.
  20. Beyond the Chat: Executable and Verifiable Text-Editing with LLMs. Laban, Philippe and Vig, Jesse and Hearst, Marti and Xiong, Caiming and Wu, Chien-Sheng. Background: LLMs in conversational interfaces lack explicit edit traceability. Purpose: Improve document editing by enabling explicit, traceable LLM suggestions. System: InkSync suggests executable edits and mitigates factual errors in document editing. Evaluation: Two usability studies compared InkSync with LLM chat interfaces. Result: InkSync delivered more accurate, efficient editing and a better user experience than chat.
  21. ScriptViz: A Visualization Tool to Aid Scriptwriting based on a Large Movie Database. Rao, Anyi and Chou, Jean-Peïc and Agrawala, Maneesh. Background: Scriptwriters rely on mental visualization and movie references. Purpose: Aid scriptwriting by providing external visualization from a movie database. System: ScriptViz instantly retrieves reference visuals for scripts from a large movie database. Evaluation: A user evaluation with 15 scriptwriters assessed the tool's efficacy. Result: ScriptViz offers consistent and diverse visuals that align with scripts.
  22. SkipWriter: LLM-Powered Abbreviated Writing on Tablets. Xu, Zheer and Cai, Shanqing and Varma T, Mukund and Venugopalan, Subhashini and Zhai, Shumin. Background: LLMs offer new possibilities for efficient text input on tablets. Purpose: The research addresses physically demanding handwriting tasks with LLM assistance. System: SkipWriter converts abbreviated handwritten prefixes into full text. Evaluation: A user evaluation showed substantial reductions in motor movement. Result: Motor movements were reduced by 60%, with average typing speeds of 25.78 WPM.
  23. Bluefish: Composing Diagrams with Declarative Relations. Pollock, Josh and Mei, Catherine and Huang, Grace and Evans, Elliot and Jackson, Daniel and Satyanarayan, Arvind. Background: Users face a trade-off between expressive but low-level tools and high-level but limited diagramming tools. Purpose: The study aims to resolve this diagramming-framework dilemma. System: Bluefish is a diagramming tool built on declarative, composable relations for flexibility. Evaluation: A diverse example gallery showcases Bluefish across fields like math and physics. Result: Bluefish's relations prove effective as declarative primitives for diagram creation.
  24. Predicting the Limits: Tailoring Unnoticeable Hand Redirection Offsets in Virtual Reality to Individuals' Perceptual Boundaries. Feick, Martin and Regitz, Kora Persephone and Gehrke, Lukas and Zenner, André and Tang, Anthony and Jungbluth, Tobias Patrick and Rekrut, Maurice and Krüger, Antonio. Background: Hand redirection (HR) in VR can disrupt the user experience if poorly calibrated. Purpose: This study tailors HR offsets to users' perceptual thresholds. System: The paper proposes using movement, eye gaze, and EEG data for personalized HR. Evaluation: Data from 18 participants were analyzed to identify Below, At, and Above HR thresholds. Result: Results show that distinguishing At and Above HR offsets from no HR is feasible.
  25. Modulating Heart Activity and Task Performance using Haptic Heartbeat Feedback: A Study Across Four Body Placements. Valente, Andreia and Lee, Dajin and Choi, Seungmoon and Billinghurst, Mark and Esteves, Augusto. Background: Haptic feedback's effect on heart activity must be understood to enhance biofeedback. Purpose: The study identifies how vibrotactile heartbeat feedback influences heart function. System: Different feedback placements and rates were used to modulate heart activity. Evaluation: A study tested feedback on the chest, wrist, neck, and ankle at two rates. Result: Neck feedback increased heart rate; the chest was preferred for user experience.
  26. Augmented Breathing via Thermal Feedback in the Nose. Brooks, Jas and Mazursky, Alex and Hixon, Janice and Lopes, Pedro. Background: Breathing perception is crucial; enhancing it can benefit interactive applications. Purpose: The research augments perceived nasal airflow to improve user interaction. System: The method cools and heats the nose using Peltier elements to enhance breath perception. Evaluation: A psychophysical study examined the influence of temperature on breathing perception. Result: 90% of trials noted an airflow change; only 8% reported a temperature change.
  27. Thermal In Motion: Designing Thermal Flow Illusions with Tactile and Thermal Interaction. Singhal, Yatharth and Honrales, Daniel and Wang, Haokun and Kim, Jin Ryong. Background: Tactile motion illusions can enhance thermal perception in VR. Purpose: Explore thermal illusions for richer VR interactions. System: Thermal illusions and tactile motion are integrated to create dynamic thermal sensations. Evaluation: Three experiments focused on placement and speed thresholds. Result: The work revealed promising VR applications for thermal interfaces.
  28. Feminist Interaction Techniques: Social Consent Signals to Deter NCIM Screenshots. Qiwei, Li and Lameiro, Francesca and Patel, Shefali and Isaula-Reyes, Cristi and Adar, Eytan and Gilbert, Eric and Schoenebeck, Sarita. Background: Non-consensual intimate media (NCIM) causes emotional, financial, and reputational harm, and solutions are needed. Purpose: Preventing NCIM protects individuals from unauthorized media sharing. System: Hands-Off uses hand gestures to deter non-consensual media screenshots. Evaluation: A lab study shows Hands-Off reduces screenshots by 67% and is user-friendly. Result: The work introduces Feminist Interaction Techniques to address societal issues.
  29. Effects of Computer Mouse Lift-off Distance Settings in Mouse Lifting Action. Kim, Munjeong and Kim, Sunjun. Background: A low lift-off distance (LoD) prevents unintentional movement but may harm tracking stability. Purpose: The study seeks the optimal LoD for minimizing error in gaming. System: LoD effects on cursor error and tracking stability were evaluated across four levels. Evaluation: A psychophysical experiment assessed perception and performance. Result: There is a trade-off: minimizing motion error compromises tracking stability.
  30. DisMouse: Disentangling Information from Mouse Movement Data. Zhang, Guanhua and Hu, Zhiming and Bulling, Andreas. Background: Mouse movement data hold rich but entangled information. Purpose: The study aims to disentangle the components of mouse movement data. System: DisMouse separates user, task, and noise components using an autoencoder. Evaluation: Evaluations involve three datasets and focus on capturing complementary information. Result: DisMouse supports explainable, controllable, and generalised mouse-data modelling.
  31. Wheeler: A Three-Wheeled Input Device for Usable, Efficient, and Versatile Non-Visual Interaction. Islam, Md Touhidul and Sojib, Noushad and Kabir, Imran and Amit, Ashiqur Rahman and Amin, Mohammad Ruhul and Billah, Syed Masum. Background: Blind users face challenges navigating complex UIs with screen readers. Purpose: This research aims to ease non-visual navigation of complex UIs. System: Wheeler is a mouse-shaped device with three wheels enabling multi-level navigation. Evaluation: A study with 12 blind users compared navigation efficiency with Wheeler versus keyboards. Result: Wheeler showed a 40% reduction in navigation time and supported mixed-ability tasks.
  32. VisCourt: In-Situ Guidance for Interactive Tactic Training in Mixed Reality. Cheng, Liqi and Jia, Hanze and Yu, Lingyun and Wu, Yihong and Ye, Shuainan and Deng, Dazhen and Zhang, Hui and Xie, Xiao and Wu, Yingcai. Background: Team-sports tactics are complex, requiring extensive practice and situational awareness. Purpose: Traditional methods fail to link theory to the real-world application of tactics. System: VisCourt, an MR training system, simulates realistic 3D tactical scenarios. Evaluation: A user study with athletes assessed VisCourt's effectiveness and satisfaction. Result: VisCourt enhances learning experiences and informs future SportsXR design.
  33. Block and Detail: Scaffolding Sketch-to-Image Generation. Sarukkai, Vishnu and Yuan, Lu and Tang, Mia and Agrawala, Maneesh and Fatahalian, Kayvon. Background: Artists need tools that align with their iterative refinement process. Purpose: The study addresses generating images from sketches with improved fidelity. System: A two-pass sketch-to-image refinement algorithm built on ControlNet is proposed. Evaluation: Evaluations include user feedback and comparisons against Scribble ControlNet. Result: Novice users preferred images from the model over the baseline in 84% of cases.
  34. EVE: Enabling Anyone to Train Robots using Augmented Reality. Wang, Jun and Chang, Chun-Cheng and Duan, Jiafei and Fox, Dieter and Krishna, Ranjay. Background: Affordable robot hardware is bringing more robots into everyday tasks. Purpose: Training robots requires expensive trajectory data, limiting accessibility. System: EVE is an iOS app for easy augmented-reality robot training without extra hardware. Evaluation: A user study with three tasks showed EVE's high success rate compared to alternatives. Result: EVE enables personalized robot training and performs well across various metrics.
  35. avaTTAR: Table Tennis Stroke Training with Embodied and Detached Visualization in Augmented Reality. Ma, Dizhi and Hu, Xiyun and Shi, Jingyu and Patel, Mayank and Jain, Rahul and Liu, Ziyi and Zhu, Zhengzhe and Ramani, Karthik. Background: Table tennis stroke training is essential for player development. Purpose: The research aims to enhance training with augmented reality. System: avaTTAR uses on-body and detached AR visual cues for training. Evaluation: A user study confirmed improvements in player experience and outcomes. Result: avaTTAR effectively amplifies stroke training through dual perspectives.
  36. Don't Mesh Around: Streamlining Manual-Digital Fabrication Workflows with Domain-Specific 3D Scanning. Moyer, Ilan E and Bourgault, Samuelle and Frost, Devon and Jacobs, Jennifer. Background: Material-driven design is vital in manual ceramics but often clashes with digital approaches. Purpose: The research addresses this clash in ceramics fabrication with novel methods. System: The CAS system integrates 3D scanning and printing with traditional pottery techniques. Evaluation: The system enables creating 3D toolpaths directly from pottery geometry. Result: CAS supports material-first design while retaining the expressiveness of software-based design.
  37. E-Joint: Fabrication of Large-Scale Interactive Objects Assembled by 3D Printed Conductive Parts with Copper Plating. Li, Xiaolong and Yao, Cheng and Shi, Shang and Feng, Shuyue and Zhou, Yujie and Dong, Haoye and Huang, Shichao and Cai, Xueyan and Jin, Kecheng and Ying, Fangtian and Wang, Guanyun. Background: 3D printing of interactive objects is hindered by size constraints and resistance issues. Purpose: The paper addresses challenges in fabricating large-scale interactive objects. System: E-Joint uses mortise-and-tenon joints with copper plating to create conductive 3D-printed parts. Evaluation: Electrified-joint feasibility was tested by constructing three applications. Result: E-Joint demonstrated usability for making large-scale interactive objects.
  38. MobiPrint: A Mobile 3D Printer for Environment-Scale Design and Fabrication. Campos Zamora, Daniel and He, Liang and Froehlich, Jon E. Background: 3D printing lacks real-world integration, making scaling and measuring tedious. Purpose: Address the limited environmental adaptability of traditional 3D printing. System: MobiPrint is a mobile system enabling environment-scale 3D fabrication. Evaluation: Proof-by-demonstration and a technical evaluation assess the system's potential. Result: The system shows potential in varied applications, though challenges and future opportunities remain.
  39. StructCurves: Interlocking Block-Based Line Structures. Sun, Zezhou and Balkcom, Devin and Whiting, Emily. Background: Traditional zippers lack the rigid interlock needed to build strong curved structures. Purpose: The research makes flexible chains rigid when assembled. System: A novel block design with revolute joints enables interlocks with programmable curvature. Evaluation: Structure rigidity is assessed via mechanical testing, with various applications shown. Result: The method creates curved interlocking structures that are flexible apart but rigid when assembled.
  40. BlendScape: Enabling End-User Customization of Video-Conferencing Environments through Generative AI. Rajaram, Shwetha and Numan, Nels and Kumaravel, Balasaravanan Thoravi and Marquardt, Nicolai and Wilson, Andrew D. Background: Video-conferencing tools lack dynamic environmental customization. Purpose: Enable end users to tailor video-conferencing environments to their needs. System: BlendScape uses generative AI to blend users' backgrounds into customized environments. Evaluation: Tested with 15 users to assess the value of generative customization. Result: Users see potential but want controls against distraction and unrealism.
  41. SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending. Numan, Nels and Rajaram, Shwetha and Kumaravel, Balasaravanan Thoravi and Marquardt, Nicolai and Wilson, Andrew D. Background: Generative AI in VR rarely incorporates users' physical context. Purpose: The aim is to support VR telepresence through context-rich environments. System: SpaceBlender creates virtual spaces by blending physical and virtual elements. Evaluation: A 20-participant study compared SpaceBlender to a generic VR environment and a state-of-the-art framework. Result: Participants valued SpaceBlender's familiarity but noted distractions from scene complexity.
  42. MyWebstrates: Webstrates as Local-first Software. Klokmose, Clemens Nylandsted and Eagan, James R. and van Hardenberg, Peter. Background: Webstrates enable sharing across domains but require central servers. Purpose: Address compatibility with local-first principles and user data control. System: MyWebstrates makes Webstrates interoperable and enables offline collaboration. Evaluation: Demonstrates use with non-web technologies while preserving data sovereignty. Result: It enables new applications and poses new challenges.
  43. SituationAdapt: Contextual UI Optimization in Mixed Reality with Situation Awareness via LLM. Li, Zhipeng and Gebhardt, Christoph and Inglin, Yves and Steck, Nicolas and Streli, Paul and Holz, Christian. Background: Mobile mixed reality needs UIs that adapt to varied contexts. Purpose: Solve the static-UI problem with adaptive layouts in mixed reality. System: SituationAdapt uses perception, reasoning, and optimization modules for UI adaptation. Evaluation: The reasoning module was validated against human experts and in an online study. Result: SituationAdapt outperforms existing methods in context-aware layout adaptation.
  44. Desk2Desk: Optimization-based Mixed Reality Workspace Integration for Remote Side-by-side Collaboration. Sidenmark, Ludwig and Zhang, Tianyu and Al Lababidi, Leen and Li, Jiannan and Grossman, Tovi. Background: Mixed reality creates adaptive hybrid workspaces but causes alignment issues. Purpose: A solution is needed to align inconsistent workspaces and enhance collaboration. System: Desk2Desk optimizes workspace layout and monitor sharing. Evaluation: A user study shows how Desk2Desk effectively merges dissimilar workspaces. Result: It enables immersive side-by-side collaboration despite physical constraints.
  45. UIClip: A Data-driven Model for Assessing User Interface Design. Wu, Jason and Peng, Yi-Hao and Li, Xin Yue Amanda and Swearngin, Amanda and Bigham, Jeffrey P and Nichols, Jeffrey. Background: UI design determines an application's usability and aesthetics, but assessing it is challenging. Purpose: Develop a model, UIClip, to assess UI design quality and relevance. System: UIClip uses screenshots and descriptions to score UI designs and suggest improvements. Evaluation: UIClip was compared with baselines using human ratings from 12 designers. Result: UIClip had the highest agreement with designers and enabled useful application demos.
  46. UICrit: Enhancing Automated Design Evaluation with a UI Critique Dataset. Duan, Peitong and Cheng, Chin-Yi and Li, Gang and Hartmann, Bjoern and Li, Yang. Background: Automated UI evaluation aids designers but lags behind human-level performance. Purpose: LLM-based evaluation needs enhancement to match human evaluators. System: A dataset of UI critiques is developed to improve LLMs' evaluation performance. Evaluation: 3,059 critiques from experienced designers were analyzed for dataset quality. Result: A 55% improvement in LLM UI feedback was achieved using few-shot and visual prompting.
  47. EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning. Jiang, Yue and Guo, Zixin and Rezazadegan Tavakoli, Hamed and Leiva, Luis A. and Oulasvirta, Antti. Background: Modern GUIs are complex and require accurate attention-prediction models. Purpose: Existing models fail to predict individual scanpaths accurately. System: EyeFormer predicts personalized scanpaths using Transformers and reinforcement learning. Evaluation: EyeFormer uses user scanpath samples to predict personalized gaze locations. Result: EyeFormer supports GUI layout optimization with personalized predictions.
  48. GPTVoiceTasker: Advancing Multi-step Mobile Task Efficiency Through Dynamic Interface Exploration and Learning. Vu, Minh Duc and Wang, Han and Chen, Jieshan and Li, Zhuang and Zhao, Shengdong and Xing, Zhenchang and Chen, Chunyang. Background: Virtual assistants face challenges in efficiency and in understanding user intentions. Purpose: The research enhances task efficiency on mobile devices using virtual assistants. System: GPTVoiceTasker uses LLMs to interpret voice commands and execute mobile interactions. Evaluation: Experiments and user studies show high accuracy in parsing commands and automating tasks. Result: GPTVoiceTasker boosts task efficiency by 34.85% with positive user feedback.
  49. VisionTasker: Mobile Task Automation Using Vision Based UI Understanding and LLM Task Planning. Song, Yunpeng and Bian, Yiheng and Tang, Yongtao and Ma, Guiyu and Cai, Zhongmin. Background: Mobile task automation faces challenges from app updates and UI hierarchies. Purpose: The research enhances task automation with the VisionTasker framework. System: VisionTasker combines vision-based UI understanding with LLM task planning for stepwise mobile automation. Evaluation: VisionTasker was tested on four datasets and 147 real-world tasks. Result: VisionTasker outperforms previous methods and even humans on unfamiliar tasks.
  50. Empower Real-World BCIs with NIRS-X: An Adaptive Learning Framework that Harnesses Unlabeled Brain Signals. Wang, Liang and Zhang, Jiayan and Liu, Jinyang and McKeon, Devon and Brizan, David Guy and Blaney, Giles and Jacob, Robert J.K. Background: BCIs with fNIRS are promising but require costly per-user calibration. Purpose: Tackle data scarcity in BCIs by exploiting unlabeled fNIRS signals. System: NIRS-X uses NIRSiam and NIRSformer to adaptively learn from unlabeled data. Evaluation: Tested on fNIRS2MW datasets to assess adaptation to new users and tasks. Result: NIRS-X matched or outperformed supervised methods on fNIRS-based BCI tasks.
  51. Understanding the Effects of Restraining Finger Coactivation in Mid-Air Typing: from a Neuromechanical Perspective. Zhang, Hechuan and Liang, Xuewei and Lei, Ying and Chen, Yanjun and He, Zhenxuan and Zhang, Yu and Chen, Lihan and Lin, Hongnan and Han, Teng and Tian, Feng. Background: Mid-air typing is intuitive but challenged by finger coactivation. Purpose: The study investigates the neuromechanical impact of coactivation to improve interactions. System: A wearable device restrains coactivation to explore its effects on mid-air tasks. Evaluation: fNIRS assesses cortical activity during tasks to gauge motor-execution load. Result: Restraining coactivation reduces mispresses and conserves neural resources.
  52. What is Affective Touch Made Of? A Soft Capacitive Sensor

    Array Reveals the Interplay between Shear, Normal McLaren, Devyani and Gao, Jian and Yin, Xiulun and Reis Guerra, Rúbia and Vyas, Preeti and Morton, Chrys and Cang, Xi Laura and Chen, Yizhong and Sun, Yiyuan and Li, Ying and Madden, John David Wyndham, et al. Background: Current sensors lack emotion-expressive richness of mammalian skin sensors. Purpose: Affective touch can enhance human-robot interaction by estimating emotions from touch. System: A flexible soft sensor array captures multitouch normal and shear stresses. Evaluation: Deep-learning classification shows accuracy increases with shear data inclusion. Result: 88% accuracy with shear data confirms shear stress's expressive centrality.
  53. Exploring the Effects of Sensory Conflicts on Cognitive Fatigue in

    VR Remappings Luo, Tianren and Chen, Gaozhang and Wen, Yijian and Wang, Pengxiang and Fan, Yachun and Han, Teng and Tian, Feng Background: VR presents cognitive challenges due to its immersive nature and sensory conflicts. Purpose: This study investigates how sensory conflicts in VR affect cognitive fatigue. System: Three remapping methods are used to study sensory conflicts' impact on perception. Evaluation: Experiments involve cognitive tasks and subjective/physiological measures. Result: Remappings affect fatigue onset/severity, with a visual-vestibular conflict being most impactful.
  54. Vision-Based Hand Gesture Customization from a Single Demonstration Shahi, Soroush

    and Mollyn, Vimal and Tymoszek Park, Cori and Kang, Runchang and Liberman, Asaf and Levy, Oron and Gong, Jun and Bedri, Abdelkareem and Laput, Gierad Background: Hand gesture recognition enhances human-computer interaction as cameras spread. Purpose: The aim is to enable users to customize gestures that are natural and memorable. System: Our method uses transformers and meta-learning for customization from one demonstration. Evaluation: We demonstrate 3 real-world apps using our method and conduct a user study. Result: We achieve up to 94% accuracy in gesture recognition from a single demonstration.
  55. VirtualNexus: Enhancing 360-Degree Video AR/VR Collaboration with Environment Cutouts and

    Virtual Huang, Xincheng and Yin, Michael and Xia, Ziyi and Xiao, Robert Background: Asymmetric AR/VR systems integrate remote users into shared spaces. Purpose: Improve 360-degree video interactivity by adding spatial context. System: Introduce VirtualNexus, enhancing VR with environment cutouts and virtual replicas. Evaluation: Demonstrated with three applications and tested in a dyadic usability study. Result: VirtualNexus enhances telepresence by improving interaction clarity and versatility.
  56. Personal Time-Lapse Tran, Nhan and Yang, Ethan and Taylor, Angelique

    and Davis, Abe Background: Subtle developments in body movements are hard to observe over time. Purpose: The research aims to help users record and visualize long-term changes. System: They propose a mobile tool using 3D tracking and computational imaging for time-lapse. Evaluation: A formative study and user evaluations demonstrate tool effectiveness. Result: The tool effectively tracks visual changes of subjects over time in challenging examples.
  57. Chromaticity Gradient Mapping for Interactive Control of Color Contrast in

    Images and Video Yan, Ruyu and Sun, Jiatian and Davis, Abe Background: Color contrast is key in enhancing details in images and video. Purpose: To develop a tool that adjusts color contrast without lightness loss. System: Introduce a tool to enhance details using local chromaticity manipulations. Evaluation: Use a familiar interface to test effects on images and video. Result: Tool enhances contrast of details while maintaining original lightness.
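The paper's goal, more color contrast with no lightness change, can be crudely imitated by scaling chroma while holding luma fixed. This sketch uses BT.601 Y'CbCr as a stand-in for the paper's chromaticity-gradient method; it is an illustration of the idea, not their algorithm.

```python
def boost_chroma(rgb, gain=1.5):
    """Scale a pixel's chroma while keeping its luma fixed (toy version
    of contrast-without-lightness-change, via BT.601 Y'CbCr).
    rgb: (r, g, b) channels in 0..255."""
    r, g, b = rgb
    y  =  0.299    * r + 0.587    * g + 0.114    * b   # luma, held fixed
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b   # blue-difference chroma
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    cb, cr = cb * gain, cr * gain                       # boost chroma only
    clamp = lambda v: max(0, min(255, round(v)))
    return (clamp(y + 1.402 * cr),
            clamp(y - 0.344136 * cb - 0.714136 * cr),
            clamp(y + 1.772 * cb))

# Grays have zero chroma, so they pass through unchanged.
print(boost_chroma((128, 128, 128)))  # (128, 128, 128)
```

With `gain=1.0` the conversion round-trips, and larger gains push colors apart in chroma while the perceived lightness of each pixel stays put (up to clamping at the gamut boundary).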
  58. Memolet: Reifying the Reuse of User-AI Conversational Memories Yen, Ryan

    and Zhao, Jian Background: Frequent interactions with AI risk exceeding memory for tailored responses. Purpose: A method is needed to retrieve and reuse pertinent conversational memories. System: Memolet is an interactive object for articulating and manipulating memory reuse. Evaluation: A study with N=12 subjects assessed Memolet’s functionality across reuse stages. Result: Insights offer design implications for systems aiding in conversational memory reuse.
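Memolet's retrieval of relevant past memories can be approximated by any relevance-ranking scheme; as a stand-in for whatever the system actually uses, here is a minimal bag-of-words cosine-similarity ranker over stored memory snippets:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_memories(query, memories):
    """Return memories sorted most-relevant-first for the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(m.lower().split())), m) for m in memories]
    return [m for score, m in sorted(scored, key=lambda x: -x[0])]

memories = [
    "user prefers dark roast coffee in the morning",
    "user is allergic to peanuts",
    "user asked about hiking trails near Kyoto",
]
print(rank_memories("any coffee recommendations?", memories)[0])
# user prefers dark roast coffee in the morning
```

A real system would use learned embeddings rather than token counts, but the reuse pipeline (score every stored memory against the current turn, surface the top matches) is the same shape.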
  59. VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations

    of Machine Learning Models Das Antar, Anindya and Molaei, Somayeh and Chen, Yan-Ying and Lee, Matthew L and Banovic, Nikola Background: Ensuring ML models make correct inferences is vital for high-stakes contexts. Purpose: Explain sequential ML models' decision-making with XAI tools effectively. System: Introduce VIME, an interactive XAI toolbox for explaining sequential model decisions. Evaluation: Evaluated VIME with 14 experts using case studies versus a baseline XAI tool. Result: VIME improved identification and explanation of model errors over the baseline.
  60. SERENUS: Alleviating Low-Battery Anxiety Through Real-time, Accurate, and User-Friendly Energy Consumption Prediction

    Lee, Sera and Jeong, Dae R. and Choi, Junyoung and Kwak, Jaeheon and Son, Seoyun and Song, Jean Y and Shin, Insik Background: Mobiles cause low-battery anxiety due to dependence and battery reliance. Purpose: Research aims to improve user experience via real-time energy use prediction. System: Introduce Serenus, a framework for app-level energy consumption prediction. Evaluation: User studies confirm Serenus effectively alleviates anxiety by aiding usage planning. Result: Requirements summarized outline anxiety mitigation for future system frameworks.
  61. ScrapMap: Interactive Color Layout for Scrap Quilting Leake, Mackenzie and

    Daly, Ross Background: Scrap quilting is complex due to arranging various fabric pieces visually. Purpose: The research aims to design efficient color layouts for scrap quilting. System: They propose ScrapMap, a tool using graph coloring concepts for quilt design. Evaluation: User evaluations showed how quilters interact with ScrapMap's tools and features. Result: Quilters found ScrapMap helpful for visualizing and using fabric scraps creatively.
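ScrapMap builds on graph coloring: patches are nodes, touching patches share an edge, and neighbors must receive different fabrics. A minimal greedy coloring (an illustration of the concept, not the paper's algorithm) looks like this:

```python
def greedy_color(adjacency):
    """Greedy graph coloring: give each node the smallest color index
    not already used by one of its neighbors.
    adjacency: dict node -> list of neighboring nodes."""
    colors = {}
    for node in adjacency:                  # insertion order
        used = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in used:                # smallest free color
            color += 1
        colors[node] = color
    return colors

# A 2x2 grid of quilt patches; edges mark patches that touch.
patches = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
assignment = greedy_color(patches)
print(assignment)  # {'A': 0, 'B': 1, 'C': 1, 'D': 0}
```

No two adjacent patches share a color, and a grid layout needs only two fabrics; a quilt-design tool layers aesthetic constraints on top of this basic guarantee.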
  62. What's in a cable? Abstracting Knitting Design Elements with Blended

    Raster/Vector Primitives Twigg-Smith, Hannah and Peng, Yuecheng and Whiting, Emily and Peek, Nadya Background: Chart-based programming complicates knitting design iteration due to manual workflows. Purpose: This research aims to simplify knitting designs by addressing stitch-level dependencies. System: A new method combining vector/raster primitives is proposed for knitting abstraction. Evaluation: The method is implemented in a tool simulating knit structures for techniques like intarsia. Result: This tool supports complex patterns like origami pleats and sensor patches, enabling higher-level designs.
  63. Embrogami: Shape-Changing Textiles with Machine Embroidery Jiang, Yu and Haynes,

    Alice C and Pourjafarian, Narjes and Borchers, Jan and Steimle, Jürgen Background: Machine embroidery creates static patterns limiting interaction options. Purpose: The research aims to enable shape-changing textiles for more versatile interaction. System: Embrogami uses embroidery to create bistable or elastic structures on textiles. Evaluation: Technical experiments and a software tool help users design and deploy Embrogami. Result: Examples showcase Embrogami's potential in creating flexible, shape-changing textiles.
  64. KODA: Knit-program Optimization by Dependency Analysis Hofmann, Megan Background: Digital

    knitting machines offer diverse capabilities hindered by limited CAD tools. Purpose: This research aims to develop a program optimizer to enhance knitting machine efficiency. System: KODA optimizes knit-programs by re-ordering instructions to reduce time and errors. Evaluation: KODA's effectiveness was assessed by analyzing instruction reordering and reduction. Result: KODA enables more readable, intuitive programs, improving reliability and efficiency.
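Reordering instructions under dependency constraints, as KODA does for knit-programs, is at heart a topological sort. The sketch below uses lexicographic tie-breaking as a hypothetical stand-in for KODA's real cost model; the instruction names are invented examples.

```python
import heapq

def reorder(instructions, deps):
    """Topological reorder: emit instructions in dependency order,
    breaking ties lexicographically among the ready instructions.
    deps: dict instruction -> set of instructions it must come after."""
    indegree = {i: 0 for i in instructions}
    dependents = {i: [] for i in instructions}
    for ins, prereqs in deps.items():
        for p in prereqs:
            indegree[ins] += 1
            dependents[p].append(ins)
    ready = [i for i in instructions if indegree[i] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        ins = heapq.heappop(ready)
        order.append(ins)
        for d in dependents[ins]:           # unlock instructions whose
            indegree[d] -= 1                # prerequisites are now met
            if indegree[d] == 0:
                heapq.heappush(ready, d)
    if len(order) != len(instructions):
        raise ValueError("cyclic dependencies")
    return order

prog = ["knit_row_2", "cast_on", "xfer", "knit_row_1"]
deps = {"knit_row_1": {"cast_on"}, "knit_row_2": {"knit_row_1"}, "xfer": {"knit_row_2"}}
print(reorder(prog, deps))  # ['cast_on', 'knit_row_1', 'knit_row_2', 'xfer']
```

Swapping the tie-break priority for a machine-time or carriage-pass cost is what turns this generic sort into an optimizer.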
  65. X-Hair: 3D Printing Hair-like Structures with Multi-form, Multi-property and Multi-function

    Wang, Guanyun and Ji, Junzhe and Xu, Yunkai and Ren, Lei and Wu, Xiaoyang and Zheng, Chunyuan and Zhou, Xiaojing and Tang, Xin and Feng, Boyu and Sun, Lingyun and Tao, Ye and Li, Jiaji Background: 3D-printing hair-like structures enables diverse applications and functionality. Purpose: Aim to create customizable 3D-printed hair with varied forms and properties. System: Introduces X-Hair method using a two-step print strategy and a design tool. Evaluation: Evaluated properties and demonstrated applications using customized parameters. Result: X-Hair is practical for biomimicry, decoration, and other functional uses.
  66. TouchpadAnyWear: Textile-Integrated Tactile Sensors for Multimodal High Spatial-Resolution Touch Inputs

    with Motion Artifact Tolerance Zhao, Junyi and Preechayasomboon, Pornthep and Christensen, Tyler and Memar, Amirhossein H. and Shen, Zhenzhen and Colonnese, Nicholas and Khbeis, Michael and Zhu, Mengjia Background: Wearables need high-resolution touch interfaces with motion tolerance. Purpose: Address wearable touch sensors' need for motion artifact tolerance. System: A textile force sensor with high spatial resolution and capacitive sensing. Evaluation: Used mechanical characterization and user evaluations on touch sensor. Result: Proves usable in daily interactions and highlights new wearable applications.
  67. SolePoser: Full Body Pose Estimation using a Single Pair of

    Insole Sensors Wu, Erwin and Khirodkar, Rawal and Koike, Hideki and Kitani, Kris Background: Current 3D pose estimation relies on cameras or bulky sensors. Purpose: Research aims to enable minimal setup for real-time full-body pose estimation. System: SolePoser uses insole sensors and a 2-stream transformer for 3D joint prediction. Evaluation: Two datasets were introduced with 908k frames across eight activities for testing. Result: SolePoser's performance rivals top methods and exceeds camera-based systems in some cases.
  68. Gait Gestures: Examining Stride and Foot Strike Variation as an

    Input Method While Walking Tsai, Ching-Yi and Yen, Ryan and Kim, Daekun and Vogel, Daniel Background: Walking consists of cyclic stride pairs, forming a continuous gait pattern. Purpose: To explore intentional gait variation for input actions during walking. System: 22 candidate Gait Gestures are designed for non-disruptive input methods. Evaluation: A formative study assesses gesture easiness, acceptability, and walking fit. Result: 7 gestures enable AR control tasks; a gesture recognizer confirms feasibility.
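A gait gesture such as a deliberately long stride can be detected from step-interval timing alone. This toy detector (not the paper's recognizer; the window size and threshold factor are invented) flags intervals that deviate from a running baseline:

```python
def detect_long_stride(intervals, window=4, factor=1.4):
    """Flag indices where a step interval exceeds `factor` times the
    mean of the preceding `window` intervals (toy gait-gesture detector).
    intervals: step durations in seconds."""
    hits = []
    for i in range(window, len(intervals)):
        baseline = sum(intervals[i - window:i]) / window
        if intervals[i] > factor * baseline:
            hits.append(i)
    return hits

# Regular ~0.5 s cadence with one deliberate long stride at index 4.
steps = [0.52, 0.50, 0.51, 0.49, 0.78, 0.50, 0.51]
print(detect_long_stride(steps))  # [4]
```

Normalizing against each walker's own recent cadence, rather than a fixed threshold, is what keeps such a detector from firing on ordinary speed changes.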
  69. EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras Mollyn, Vimal

    and Harrison, Chris Background: AR/VR interfaces benefit from speedy on-body touch input. Purpose: Explore bare-hand touch input using modern XR headset cameras. System: Utilize RGB cameras for on-body input without special instruments. Evaluation: Tested accuracy in varied lighting, skin tones, and while walking. Result: Our method provides rich metadata for practical on-skin interfaces.
  70. MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from

    IMUs in Mobile Consumer Devices Xu, Vasco and Gao, Chenfeng and Hoffmann, Henry and Ahuja, Karan Background: Current motion capture trends aim to reduce equipment and increase accessibility. Purpose: Devices like phones and earbuds struggle with sensor noise and drift issues. System: MobilePoser: real-time system for full-body pose using deep networks and IMUs. Evaluation: Uses deep neural networks and physics-based optimization for evaluation. Result: MobilePoser demos show potential in health and gaming with accurate estimations.
  71. Touchscreen-based Hand Tracking for Remote Whiteboard Interaction Liu, Xinshuang and

    Zhang, Yizhong and Tong, Xin Background: Seamless whiteboard interactions need accurate hand-screen integration. Purpose: Existing methods need bulky devices or lack accuracy in hand pose tracking. System: We propose a network to infer 3D hand poses from capacitive video frames. Evaluation: A dataset was used to test improvements on accuracy and stability in tracking. Result: Our method achieves superior hand tracking for remote communication setups.
  72. SeamPose: Repurposing Seams as Capacitive Sensors in a Shirt for

    Upper-Body Pose Tracking Yu, Tianhong Catherine and Zhang, Manru Mary and He, Peter and Lee, Chi-Jung and Cheesman, Cassidy and Mahmud, Saif and Zhang, Ruidong and Guimbretiere, Francois and Zhang, Cheng Background: Seams could be used innovatively for capacitive sensors in wearable tech. Purpose: The study aims to enable upper-body pose estimation using shirt seams. System: SeamPose uses machine-sewn threads on seams for discrete pose-tracking. Evaluation: A 12-participant study assessed the pose accuracy with deep-learning methods. Result: The shirt achieves a mean joint error of 6.0 cm, supporting unobtrusive smart wear.
  73. Story-Driven: Exploring the Impact of Providing Real-time Context Information on

    Automated Storytelling Belz, Jan Henry and Weilke, Lina Madlin and Winter, Anton and Hallgarten, Philipp and Rukzio, Enrico and Grosse-Puppendahl, Tobias Background: Static storytelling lacks adaptation to listener's changing environments. Purpose: Research aims to enhance storytelling with real-time adaptive context. System: The system continuously adjusts stories using the environment and arrival times. Evaluation: User experience assessed through studies with 30 participants in vehicles. Result: Context-driven stories showed significant improvement over traditional methods.
  74. Lumina: A Software Tool for Fostering Creativity in Designing Chinese

    Shadow Puppets Yao, Zhihao and Lu, Yao and Sun, Qirui and Lyu, Shiqing and Li, Hanxuan and Yang, Xing-Dong and Wang, Xuezhu and Liu, Guanhong and Mi, Haipeng Background: Digital transition challenges shadow puppetry, stifling creativity, particularly for novices. Purpose: The research aims to aid early design phase creativity in Chinese shadow puppetry. System: Introducing Lumina, a tool offering templates and animations to aid shadow puppet design. Evaluation: Evaluated with 18 creators showing Lumina's effectiveness in creating diverse designs. Result: Participants crafted designs ranging from traditional to science-fiction themes successfully.
  75. PortalInk: 2.5D Visual Storytelling with SVG Parallax and Waypoint Transitions

    Zhou, Tongyu and Yang, Joshua Kong and Chan, Vivian Hsinyueh and Chung, Ji Won and Huang, Jeff Background: Artists face challenges in translating 2D imagery to 3D scenes. Purpose: Enable artists to create 2.5D stories easily in 2D space. System: PortalInk uses SVG for parallax effects and waypoint transitions. Evaluation: Three case studies show how artists use PortalInk effectively. Result: Artists can craft immersive comics and more with PortalInk.
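PortalInk's parallax idea, nearer layers shifting more with the viewpoint, can be sketched by emitting grouped SVG layers with depth-scaled offsets. This is a toy illustration of the 2.5D effect, not PortalInk's actual format; the 40-pixel shift budget is an invented parameter.

```python
def parallax_svg(layers, pointer_x, width=400, height=300):
    """Render stacked SVG layers with a simple horizontal parallax shift.
    layers: list of (depth, svg_fragment); deeper layers move less.
    pointer_x: pointer position in [0, 1]; 0.5 means centered."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for depth, fragment in sorted(layers, reverse=True):   # paint far layers first
        shift = (pointer_x - 0.5) * 40 / depth             # nearer => larger shift
        parts.append(f'<g transform="translate({shift:.1f},0)">{fragment}</g>')
    parts.append('</svg>')
    return "".join(parts)

doc = parallax_svg(
    [(1, '<circle cx="200" cy="150" r="20"/>'),            # foreground, depth 1
     (4, '<rect width="400" height="300" fill="#cde"/>')], # background, depth 4
    pointer_x=0.75,
)
```

Re-rendering on every pointer move makes the flat SVG layers read as a scene with depth, which is the core of the 2.5D illusion.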
  76. DrawTalking: Building Interactive Worlds by Sketching and Speaking Rosenberg, Karl

    Toby and Kazi, Rubaiat Habib and Wei, Li-Yi and Xia, Haijun and Perlin, Ken Background: Creation of interactive worlds needs simplification for non-programmers. Purpose: The study aims to grant programming-like control via sketching and speaking. System: Proposes DrawTalking, merging sketch-based methods with verbal storytelling. Evaluation: An early study assesses its broad applicability in creative-exploratory use. Result: Findings show potential for new natural interfaces in creative exploration.
  77. Patchview: LLM-powered Worldbuilding with Generative Dust and Magnet Visualization Chung,

    John Joon Young and Kreminski, Max Background: LLMs can generate world elements, but sensemaking remains challenging. Purpose: To address overwhelming complexity in generated elements through better control. System: Patchview uses magnets and dust metaphors for intuitive world element interaction. Evaluation: A user study examines how Patchview supports sensemaking and element steering. Result: Patchview shows customizable visuals can align AI with user intentions.
  78. An Interactive System for Supporting Creative Exploration of Cinematic Composition

    Designs He, Rui and Wei, Huaxin and Cao, Ying Background: Cinematic composition in filmmaking is difficult, needing skill and creativity. Purpose: Research aims to simplify creating cinematic compositions in virtual environments. System: Introduces Cinemassist, a tool to aid creative design using cinematic proposals. Evaluation: Utilizes a deep generative model for plausible camera poses in an interactive workflow. Result: Cinemassist enhances design quality, especially benefitting users with animation expertise.
  79. SIM2VR: Towards Automated Biomechanical Testing in VR Fischer, Florian and

    Ikkala, Aleksi and Klar, Markus and Fleig, Arthur and Bachinski, Miroslav and Murray-Smith, Roderick and Hämäläinen, Perttu and Oulasvirta, Antti and Müller, Jörg Background: Automated testing can improve early VR design by predicting biomechanics. Purpose: Current simulators lack real-world fidelity in VR user behavior prediction. System: SIM2VR closes the loop between user simulation and VR application. Evaluation: SIM2VR predicts user performance in a dynamic arcade game. Result: Broader testing needs cognitive model and reward function advancements.
  80. Hands-on, Hands-off: Gaze-Assisted Bimanual 3D Interaction Lystbæk, Mathias N. and

    Mikkelsen, Thorbjørn and Krisztandl, Roland and Gonzalez, Eric J and Gonzalez-Franco, Mar and Gellersen, Hans and Pfeuffer, Ken Background: XR systems with hand-tracking facilitate direct bimanual object manipulation. Purpose: To explore gaze-based transformations in bimanual interactions for flexible operations. System: Introduce three Gaze+Pinch modes enhancing indirect interaction in XR environments. Evaluation: Conduct a user study on asymmetric tasks to assess indirect versus direct inputs. Result: Indirect bimanual input reduces effort; users prefer the non-dominant hand (NDH) for direct object orientation.
  81. Pro-Tact: Hierarchical Synthesis of Proprioception and Tactile Exploration for Eyes-Free

    Ray Pointing on Out-of-View Menus Kim, Yeonsu and Yim, Jisu and Kim, Kyunghwan and Yun, Yohan and Lee, Geehyuk Background: VR menus can be hard to navigate without visual cues. Purpose: Existing methods lack precision and cause discomfort in out-of-view (OoV) interactions. System: Pro-Tact combines proprioception with tactile adjustments for accurate menu access. Evaluation: A user study assessed interaction accuracy, fatigue, and sickness. Result: Pro-Tact achieved 95% accuracy with reduced fatigue and voluntary eyes-free use.
  82. GradualReality: Enhancing Physical Object Interaction in Virtual Reality via Interaction

    State-Aware Blending Seo, HyunA and Yi, Juheon and Balan, Rajesh and Lee, Youngki Background: Physical interaction in VR is complex due to shifting user attention. Purpose: The research aims to maintain presence and usability in Cross Reality. System: Interaction State-Aware Blending balances immersion and interaction. Evaluation: User studies and interviews validated a working prototype for comparison. Result: GradualReality enhances Cross Reality experiences beyond the baselines.
  83. StegoType: Surface Typing from Egocentric Cameras Richardson, Mark and Botros,

    Fadi and Shi, Yangyang and Guo, Pinhao and Snow, Bradford J and Zhang, Linguang and Dong, Jingming and Vertanen, Keith and Ma, Shugao and Wang, Robert Background: Text input remains challenging in AR/VR environments despite its importance. Purpose: The study aims to enable natural touch typing using egocentric cameras. System: A deep learning model decodes touch typing from uninstrumented surfaces. Evaluation: User study (n=18) assessed text input method vs physical keyboards. Result: Mean throughput of 42.4 WPM with 7% error, showing progress toward AR/VR typing.
  84. Eye-Hand Movement of Objects in Near Space Extended Reality Wagner,

    Uta and Asferg Jacobsen, Andreas and Feuchtner, Tiare and Gellersen, Hans and Pfeuffer, Ken Background: Extended Reality enhances interaction through intuitive direct hand gestures. Purpose: This study aims to integrate eye-tracking to minimize effort in XR interactions. System: Gaze controls object translation in X-Y while hands refine movement on the Z-axis. Evaluation: Four movement techniques were tested against baselines in a user study with 24 participants. Result: Eye-hand techniques significantly cut physical effort compared to direct gestures in XR.
  85. ProgramAlly: Creating Custom Visual Access Programs via Multi-Modal End-User

    Programming Herskovitz, Jaylin and Xu, Andi and Alharbi, Rahaf and Guo, Anhong Background: Visual assistive tech lacks customization for blind users' unique needs. Purpose: This research solves the lack of customizable visual access for blind users. System: ProgramAlly lets users create custom visual filters via end-user programming. Evaluation: User studies with 12 blind adults assessed preference for programming approaches. Result: Users used ProgramAlly to create programs addressing unique accessibility challenges.
  86. Accessible Gesture Typing on Smartphones for People with Low Vision

    Zhang, Dan and Li, Zhi and Ashok, Vikas and Seiple, William H and Ramakrishnan, IV and Bi, Xiaojun Background: Gesture typing is limited for low vision users on touchscreen keyboards. Purpose: To develop more accessible keyboards for low vision users. System: Design of layout-magnified and key-magnified keyboard prototypes for gesture typing. Evaluation: User study compared the key-magnified keyboard and conventional keyboards with voice feedback. Result: The key-magnified keyboard increased typing speed by 27.5%.
  87. AccessTeleopKit: A Toolkit for Creating Accessible Web-Based Interfaces for Tele-Operating

    an Assistive Robot Ranganeni, Vinitha and Dhat, Varad and Ponto, Noah and Cakmak, Maya Background: Existing robot interfaces aren't accessible to people with severe motor impairments. Purpose: The work seeks to improve tele-operation accessibility for motor-limited individuals. System: AccessTeleopKit is an open-source toolkit for building customizable, accessible interfaces for the Stretch 3 robot. Evaluation: Participatory design and deployment with diverse users tested AccessTeleopKit's utility. Result: Various tasks were achieved, highlighting customization for user-specific needs.
  88. Memory Reviver: Supporting Photo- Collection Reminiscence for People with Visual

    Impairment via a Proactive Chatbot Xu, Shuchang and Chen, Chang and Liu, Zichen and Jin, Xiaofu and Yuan, Lin-Ping and Yan, Yukang and Qu, Huamin Background: Reminiscing with photos benefits psychology but is challenging for people with visual impairment (PVI). Purpose: The research aims to enable photo reminiscence for PVI without needing sighted help. System: Memory Reviver: a chatbot that helps PVI explore photo memories via natural language. Evaluation: Tested with twelve PVI to assess reminiscence facilitation and conversation quality. Result: Memory Reviver enhanced photo understanding and reminiscence for PVI effectively.
  89. VizAbility: Enhancing Chart Accessibility with LLM-based Conversational Interaction Gorniak, Joshua

    and Kim, Yoon and Wei, Donglai and Kim, Nam Wook Background: Traditional methods underrepresent data visualization for accessibility needs. Purpose: To enhance chart accessibility through conversational interaction for visually impaired users. System: VizAbility uses an LLM-based system allowing natural language chart navigation. Evaluation: The multimodal approach of VizAbility was assessed through qualitative and quantitative methods. Result: Results suggest potential for integration in workflows and improved testing with vision models.
  90. Computational Trichromacy Reconstruction: Empowering the Color-Vision Deficient to Recognize Colors Zhu,

    Yuhao and Chen, Ethan and Hascup, Colin and Yan, Yukang and Sharma, Gaurav Background: Color Vision Deficiencies (CVD) affect color perception and cause naming confusions. Purpose: The research aims to enable CVD individuals to accurately recognize colors. System: We propose an AR system to transform color space for clearer color distinctions. Evaluation: We conduct psychophysical experiments and user studies to test perceptual shifts. Result: Users report enhanced color recognition using the AR App in real-world tasks.
  91. DiscipLink: Unfolding Interdisciplinary Information Seeking Process via Human-AI Co-Exploration Zheng,

    Chengbo and Zhang, Yuanhao and Huang, Zeyu and Shi, Chuhan and Xu, Minrui and Ma, Xiaojuan Background: Researchers struggle with interdisciplinary exploration across fields. Purpose: To assist researchers in interdisciplinary information seeking (IIS). System: DiscipLink offers AI-driven question tools for tailored research queries. Evaluation: A comparative and exploratory study evaluates DiscipLink effectiveness. Result: DiscipLink aids in integrating scattered knowledge and supports IIS.
  92. Improving Steering and Verification in AI-Assisted Data Analysis with Interactive

    Task Decomposition Kazemitabaar, Majeed and Williams, Jack and Drosos, Ian and Grossman, Tovi and Henley, Austin Zachary and Negreanu, Carina and Sarkar, Advait Background: LLM-powered tools help tackle complex data analysis programming. Purpose: AI-generated results' verification and steering present significant challenges. System: Two approaches: Stepwise subgoals and Phasewise logical phases for AI tasks. Evaluation: Controlled experiment (n=18) compared new systems with a conversational baseline. Result: Users reported greater control, suggesting new design guidelines for AI tools.
  93. VizGroup: An AI-assisted Event-driven System for Collaborative Programming Learning Analytics

    Tang, Xiaohang and Wong, Sam and Pu, Kevin and Chen, Xi and Yang, Yalong and Chen, Yan Background: Collaborative learning like Peer Instruction often fails due to diverse mental models. Purpose: This research aims to improve collaboration oversight using an AI-assisted system. System: VizGroup helps instructors track real-time collaboration using LLMs for event recommendations. Evaluation: VizGroup was evaluated with 12 instructors using comparison study and collected datasets. Result: VizGroup enabled effective tracking of student behavior nuances in large courses.
  94. Who did it? How User Agency is influenced by Visual

    Properties of Generated Images Didion, Johanna K. and Wolski, Krzysztof and Wittchen, Dennis and Coyle, David and Leimkühler, Thomas and Strohmeier, Paul Background: AI and GenAI interfaces need to meet specific affordances and human needs. Purpose: To explore how AI-generated image properties influence user agency experience. System: Study measures temporal binding and magnitude estimation for agency assessment. Evaluation: Used image types to analyze temporal binding and agency judgment differences. Result: Temporal binding correlates with semantic differences, while agency aligns with local ones.
  95. FathomGPT: A natural language interface for interactively exploring ocean science

    data Khanal, Nabin and Yu, Chun Meng and Chiu, Jui-Cheng and Chaudhary, Anav and Zhang, Ziyue and Katija, Kakani and Forbes, Angus G. Background: Interactive exploration of ocean data is crucial for advancing marine science. Purpose: Address marine scientists' need for user-friendly tools to analyze FathomNet data. System: Develop FathomGPT, a tool for querying and visualizing ocean data with natural language. Evaluation: Conduct ablation studies to assess name resolution, fine-tuning, and prompt modification. Result: FathomGPT successfully enhances data exploration for marine scientists.
  96. VRCopilot: Authoring 3D Layouts with Generative AI Models in VR

    Zhang, Lei and Pan, Jin and Gettig, Jacob and Oney, Steve and Guo, Anhong Background: Immersive authoring creates 3D scenes in VR, needing better user interaction. Purpose: Explore generative AI integration in VR for fluid interactions and creativity. System: VRCopilot uses generative AI for mixed-initiative co-creation in VR. Evaluation: User studies assess manual, scaffolded, and automatic creation methods. Result: Scaffolded creation with wireframes improved user agency.
  97. ProtoDreamer: A Mixed-prototype Tool Combining Physical Model and Generative AI

    to Support Conceptual Design Zhang, Hongbo and Chen, Pei and Xie, Xuelong and Lin, Chaoyi and Liu, Lianyan and Li, Zhuoshu and You, Weitao and Sun, Lingyun Background: Prototyping is essential in conceptual design, facilitating problem exploration. Purpose: Designers find generative AI challenging due to computer-centered rules. System: ProtoDreamer merges AI and physical prototypes for conceptual design. Evaluation: A study confirmed ProtoDreamer's efficacy in creativity and time efficiency. Result: ProtoDreamer aids in creativity, exposing defects, and encouraging detailed thinking.
  98. TorqueCapsules: Fully-Encapsulated Flywheel Actuation Modules for Designing and Prototyping Movement-

    Yang, Willa Yunqi and Zou, Yifan and Huang, Jingle and Abujaber, Raouf and Nakagaki, Ken Background: Flywheels store and release kinetic energy but are hard to prototype interactions with due to safety concerns. Purpose: The research tackles unintuitive flywheel control and safety to make the actuators approachable. System: TorqueCapsules are easy-to-use, self-contained flywheel modules for haptics and interactions. Evaluation: Workshops with novices and experts provided feedback on using TorqueCapsules in applications. Result: TorqueCapsules are versatile, supporting applications such as wearables, robots, and everyday objects.
  99. AniCraft: Crafting Everyday Objects as Physical Proxies for Prototyping 3D

    Character Animation in Mixed Reality Li, Boyu and Yuan, Linping and Yan, Zhe and Liu, Qianxi and Shen, Yulin and Wang, Zeyu Background: 3D character animation prototyping needs accessible physical proxies. Purpose: Solve the need for affordable and easily accessible prototyping tools. System: AniCraft uses everyday objects as proxies with markers and webcams for animation. Evaluation: User experiments compare AniCraft with traditional animation methods. Result: AniCraft outperforms traditional methods in rapid prototyping.
  100. Mul-O: Encouraging Olfactory Innovation in Various Scenarios Through a Task-Oriented

    Development Gao, Peizhong and Liu, Fan and Wen, Di and Gao, Yuze and Zhang, Linxin and Wang, Chikelei and Zhang, Qiwei and Zhang, Yu and Ma, Shao-en and Lu, Qi and Mi, Haipeng and Xu, Yingqing Background: Olfactory interfaces are hampered by limited application scenarios in HCI. Purpose: Mul-O aims to expand research opportunities by enhancing context adaptation. System: Mul-O enables prototype creation with web UI, API, and wireless display hardware. Evaluation: Tested during a 15-day workshop with 30 participants and various projects. Result: Seven innovative projects validated Mul-O's effectiveness for olfactory innovation.
  101. Fiery Hands: Designing Thermal Glove through Thermal and Tactile Integration

    for Virtual Object Manipulation Wang, Haokun and Singhal, Yatharth and Gil, Hyunjae and Kim, Jin Ryong Background: Thermal feedback enhances sensory experience in VR object manipulation. Purpose: Researchers aim to integrate thermal and tactile feedback for immersive VR interaction. System: The study proposes thermal actuators combined with tactile to generate diverse sensations. Evaluation: User studies validated perception and effectiveness of localized thermal patterns in VR. Result: The work demonstrates enhanced VR sensory immersion with optimal energy cost.
  102. DexteriSync: A Hand Thermal I/O Exoskeleton for Morphing Finger Dexterity

    Experience Shen, Ximing and Kamiyama, Youichi and Minamizawa, Kouta and Nishida, Jun Background: Hand dexterity relies significantly on skin temperature. Purpose: To design an exoskeleton to adjust and simulate finger dexterity via skin temperature. System: Introduce DexteriSync, an exoskeleton manipulating finger temperature for realistic experience. Evaluation: Validated with a technical evaluation and two user studies. Result: DexteriSync enhances understanding of dexterity challenges, aiding persuasive design of assistive tools.
  103. Flip-Pelt: Motor-Driven Peltier Elements for Rapid Thermal Stimulation and Congruent

    Pressure Kang, Seongjun and Kim, Gwangbin and Hwang, Seokhyun and Park, Jeongju and Elsharkawy, Ahmed Ibrahim Ahmed Mohamed and Kim, SeungJun Background: Thermal feedback in VR requires rapid, congruent stimuli to improve user experience. Purpose: The study aims to enhance haptic experiences in VR through congruent thermal and pressure feedback. System: "Flip-Pelt" uses motor-driven Peltier elements for rapid thermal and pressure changes. Evaluation: User ability to recognize heat/cold and stiffness patterns was tested using Flip-Pelt in VR. Result: "Flip-Pelt" improves thermal recognition accuracy and enhances VR haptic experiences.
  104. Hydroptical Thermal Feedback: Spatial Thermal Feedback Using Visible Lights and

    Water Ichihashi, Sosuke and Inami, Masahiko and Ho, Hsin-Ni and Howell, Noura Background: Thermal feedback in HCI is limited by slow and small-area capabilities. Purpose: To enhance spatial thermal feedback by utilizing visible lights in water. System: Hydroptical thermal feedback uses visible lights to create thermal sensations in water. Evaluation: Conducted physical and psychophysical experiments to assess thermal perception. Result: Shorter wavelength lights enhance perceived warmth and enable motion illusions.
  105. ShadowMagic: Designing Human-AI Collaborative Support for Comic Professionals’ Shadowing Ganguly,

    Amrita and Yan, Chuan and Chung, John Joon Young and Sun, Tong Steven and Kiheon, Yoon and Gingold, Yotam and Hong, Sungsoo Ray Background: Shadowing in comics is manual and time-consuming but conveys realism. Purpose: The study seeks to improve shadowing using AI for comic professionals. System: ShadowMagic offers AI-generated shadows, semantics choice, and manual adjustments. Evaluation: Evaluations showed higher satisfaction and easier adoption using ShadowMagic's workflow. Result: Participants were more satisfied with ShadowMagic compared to traditional methods.
  106. What's the Game, then? Opportunities and Challenges for Runtime Behavior

    Generation Jennings, Nicholas and Wang, Han and Li, Isabel and Smith, James and Hartmann, Bjoern Background: PCG algorithmically creates game components, revolutionizing game development. Purpose: This study examines LLM-based runtime behavior generation for games. System: GROMIT generates behaviors compiled at runtime in Unity without developer input. Evaluation: Demonstrations and developer interviews assess GROMIT's impact on gameplay. Result: Developers raise concerns about quality, expectations, and fit with workflows.
  107. StyleFactory: Towards Better Style Alignment in Image Creation through Style-Strength-Based

    Control Zhou, Mingxu and Zhang, Dengming and You, Weitao and Yu, Ziqi and Wu, Yifei and Pan, Chenghao and Liu, Huiting and Lao, Tianyu and Chen, Pei Background: Generative AI struggles to align with users' desired aesthetic styles. Purpose: The research aims to improve style alignment in image creation. System: StyleFactory uses style-strength-based control to enhance style matching. Evaluation: Technical evaluation and user studies assessed StyleFactory's effectiveness. Result: StyleFactory improves style alignment, boosts creativity, and enhances user experience.
  108. AutoSpark: Supporting Automobile Appearance Design Ideation with Kansei Engineering and

    Generative AI Chen, Liuqing and Jing, Qianzhi and Tsang, Yixin and Wang, Qianyi and Liu, Ruocong and Xia, Duowei and Zhou, Yunzhan and Sun, Lingyun Background: Rapid creation of novel designs aligning with emotional requirements is challenging. Purpose: To help designers align emotional needs with design intentions using creative tools. System: AutoSpark integrates Kansei Engineering and AI for automobile design ideation support. Evaluation: A user study assessed how effectively AutoSpark aids alignment and quality in designs. Result: AutoSpark enhances design quality and emotional alignment better than baseline systems.
  109. Degrade to Function: Towards Eco-friendly Morphing Devices that Function

    Through Programmed Sequential Degradation Lu, Qiuyu and Yi, Semina and Gan, Mengtian and Huang, Jihong and Zhang, Xiao and Yang, Yue and Shen, Chenyi and Yao, Lining Background: Morphing devices can benefit from controlled material degradation. Purpose: Introduce eco-friendly devices using sequential degradation. System: "Degrade to Function" uses degradations for device transformations. Evaluation: Design includes suitable environmental triggers and material selection. Result: Examples show DtF's versatility across varied environments.
  110. WasteBanned: Supporting Zero Waste Fashion Design Through Linked Edits Zhang,

    Ruowang and Mueller, Stefanie and Bernstein, Gilbert Louis and Schulz, Adriana and Leake, Mackenzie Background: The fashion industry creates textile waste through cut-and-sew garment methods. Purpose: The research aims to support zero waste fashion design via improved layout efficiency. System: WasteBanned is a tool combining CAM and CAD to enhance zero waste garment design. Evaluation: The tool's effectiveness was tested by evaluating designer edits to zero waste patterns. Result: User evaluation showed WasteBanned aids in efficient, varied design for different bodies.
  111. HoloChemie - Sustainable Fabrication of Soft Biochemical Holographic Devices for

    Ubiquitous Sensing Roy, Sutirtha and Chowdhury, Moshfiq-Us-Saleheen and Noim, Jurjaan Onayza and Pandey, Richa and Nittala, Aditya Shekhar Background: Sustainable biomaterials in HCI often focus on electronics integration. Purpose: The study explores sustainable fabrication of biochemical sensing devices. System: It proposes devices using bio-sourced materials for detecting analytes. Evaluation: Application scenarios demonstrate versatility, discussing sustainability impacts. Result: Novel scheme enables quantitative analyte detection in HCI contexts.
  112. PointerVol: A Laser Pointer for Swept Volumetric Displays Fernández, Unai

    Javier and Sarasate, Iosune and Ezcurdia, Iñigo and Lopez-Amo, Manuel and Fernández, Ivan and Marzo, Asier Background: Volumetric displays show 3D content, but traditional pointers show lines, not points. Purpose: To develop a device for precise pointing within swept volumetric displays. System: Presenting PointerVol, a modified laser pointer for pointing in 3D space. Evaluation: Implemented timing and distance measurement approaches for performance evaluation. Result: PointerVol increases usability of volumetric displays for presentations.
  113. RFTIRTouch: Touch Sensing Device for Dual-sided Transparent Plane Based on

    Repropagated Frustrated Total Internal Reflection Wattanaparinton, Ratchanon and Kitada, Kotaro and Takemura, Kentaro Background: FTIR imaging is common in touch systems but faces size and structure limits. Purpose: The study addresses real-time touch detection without side observation constraints. System: RFTIRTouch uses physics-based estimation for dual-sided, retrofittable detection. Evaluation: Tests show touch position estimates have a 2.1 mm error under optimal conditions. Result: RFTIRTouch supports dual-side sensing and waterproof capabilities with one sensor.
  114. IRIS: Wireless ring for vision-based smart home interaction Kim, Maruchi

    and Glenn, Antonio and Veluri, Bandhav and Lee, Yunseo and Gebre, Eyoel and Bagaria, Aditya and Patel, Shwetak and Gollakota, Shyamnath Background: Size and power constraints limit wireless smart ring integration. Purpose: To enable a vision-based smart ring for smart home interaction while meeting size, weight, and power (SWaP) constraints. System: IRIS features a camera, Bluetooth, and IMU for context-adaptive interaction. Evaluation: Tested on 23 participants, comparing IRIS' performance against voice commands. Result: IRIS showed superior performance and user preference over voice commands.
  115. Silent Impact: Tracking Tennis Shots from the Passive Arm Park,

    Junyong and Yang, Saelyne and Jo, Sungho Background: Wearable technology impacts sports but often hinders natural motion. Purpose: Current tennis-tracking solutions are disruptive; the study seeks a less intrusive approach. System: The Silent Impact system uses passive arm sensors for shot analysis. Evaluation: Neural networks classify six shots from passive arm data with high accuracy. Result: The passive arm method proved effective and comfortable in user studies.
  116. VoicePilot: Harnessing LLMs as Speech Interfaces for Physically Assistive Robots

    Padmanabha, Akhil and Yuan, Jessie and Gupta, Janavi and Karachiwalla, Zulekha and Majidi, Carmel and Admoni, Henny and Erickson, Zackory Background: Physically assistive robots improve independence for those with motor impairments. Purpose: Current LLM-assisted speech interfaces lack human-centric development considerations. System: We propose a speech interface framework using LLMs for assistive robots with iterative testing. Evaluation: Evaluated with 11 older adults using both quantitative and qualitative data. Result: Framework proved effective, offering design guidelines for LLM-based assistive interfaces.
  117. ComPeer: A Generative Conversational Agent for Proactive Peer Support Liu,

    Tianjian and Zhao, Hongzheng and Liu, Yuheng and Wang, Xingbo and Peng, Zhenhui Background: CAs enhance mental health, but existing types limit long-term engagement. Purpose: This research aims to develop a CA for proactive peer support. System: ComPeer uses large language models to offer strategic, adaptive support. Evaluation: A one-week study with 24 participants assessed ComPeer's effectiveness. Result: ComPeer improved support and engagement over baseline user-initiated CAs.
  118. SHAPE-IT: Exploring Text-to-Shape-Display for Generative Shape-Changing Behaviors with

    LLMs Qian, Wanli and Gao, Chenfeng and Sathya, Anup and Suzuki, Ryo and Nakagaki, Ken Background: Traditional shape displays require complex programming for dynamic behaviors. Purpose: The study addresses the need for user-friendly shape-changing technology. System: SHAPE-IT uses LLMs for text-based shape behavior authoring. Evaluation: Performance and user evaluations (N=10) assessed SHAPE-IT's effectiveness. Result: The study shows rapid ideation but reveals accuracy challenges needing solutions.
  119. WaitGPT: Monitoring and Steering Conversational LLM Agent in Data Analysis

    with On-the-Fly Code Xie, Liwenhan and Zheng, Chengbo and Xia, Haijun and Qu, Huamin and Zhu-Tian, Chen Background: LLMs aid data analysis via conversational UIs but make code logic hard to verify. Purpose: The research seeks to boost user understanding and control over LLM-generated code. System: A novel method transforms LLM code into stepwise interactive visualizations. Evaluation: A prototype, WaitGPT, is tested on 12 users to gauge usability and effectiveness. Result: WaitGPT helps in error detection and boosts users' confidence in the analysis.
  120. Rhapso: Automatically Embedding Fiber Materials into 3D Prints for Enhanced

    Interactivity Ashbrook, Daniel and Lin, Wei-Ju and Bentley, Nicholas and Soponar, Diana and Yan, Zeyu and Savage, Valkyrie and Cheng, Lung-Pan and Peng, Huaishu and Kim, Hyunyoung Background: 3D prints often lack integrated functionalities like strength and sensing. Purpose: The research aims to embed functional fibers into low-cost 3D prints. System: Introduce 'Rhapso', a system modifying FFF printers to embed fibers during printing. Evaluation: Used motor-controlled fiber spool and parsing software for precise fiber placement. Result: Functional prints with actuation and sensing without manual intervention enabled.
  121. Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material

    3D Printing Ozdemir, Mehmet and AlAlawi, Marwa and Dogan, Mustafa Doga and Martinez Castro, Jose Francisco and Mueller, Stefanie and Doubrovski, Zjenja Background: Single-material 3D printing lacks high-resolution visual and tactile control. Purpose: Overcome low resolution in single-material 3D printing of shades and textures. System: Speed-Modulated Ironing uses dual nozzles for controlled temperature effects. Evaluation: Method tested with three materials showing temperature-induced visual changes. Result: Achieves detailed surface features such as text and QR codes on prints.
  122. TRAvel Slicer: Continuous Extrusion Toolpaths for 3D Printing Gould, Jaime

    and Friedman-Gerlicz, Camila and Buechley, Leah Background: Traditional slicing includes unproductive non-extrusion movements. Purpose: Reduce extruder travel to improve printing across more printers and materials. System: TRAvel Slicer minimizes inner and outer model travel through sequential pathing. Evaluation: 3D printing trials with CeraMetal and plastic were conducted to show advancements. Result: TRAvel Slicer enables printing of otherwise unprintable materials and improves printing with plastic.
  123. Facilitating the Parametric Definition of Geometric Properties in Programming-Based

    CAD Gonzalez, J Felipe and Pietrzak, Thomas and Girouard, Audrey and Casiez, Géry Background: Parametric CAD allows customization but requires complex programming. Purpose: Programming-based CAD lacks assistance for creating parametric designs. System: Users can retrieve parametric expressions from visual models for code reuse. Evaluation: A proof-of-concept was tested with 11 users in the OpenSCAD CAD application. Result: The solution improves interactivity and lowers the entry barrier for CAD users.
  124. Understanding and Supporting Debugging Workflows in CAD Hähnlein, Felix and

    Bernstein, Gilbert and Schulz, Adriana Background: Parametric CAD allows edits but reference errors make it challenging. Purpose: Helping users debug reference errors in CAD models is significantly needed. System: DeCAD, a tool, aids users by visualizing CAD changes to debug errors. Evaluation: A lab study evaluated DeCAD by addressing user challenges and workflow strategies. Result: Findings suggest design implications for future debugging tool development.
  125. SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR

    Cho, Hyunsung and Sendhilnathan, Naveen and Nebeling, Michael and Wang, Tianyi and Padmanabhan, Purnima and Browder, Jonathan and Lindlbauer, David and Jonker, Tanya R. and Todi, Kashyap Background: XR gaze selection often lacks accurate visual feedback. Purpose: Address accuracy in gaze-based selection without visual feedback in XR. System: SonoHaptics maps visual to audio-haptic features for object selection. Evaluation: Compared effectiveness in cluttered scenes without visual feedback. Result: SonoHaptics improves object identification and selection accuracy.
  126. Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality

    Cho, Hyunsung and Wang, Alexander and Kartik, Divya and Xie, Emily Liying and Yan, Yukang and Lindlbauer, David Background: Spatial audio in XR enhances user awareness of virtual elements' positions. Purpose: The study aims to tackle localization errors in XR due to auditory limitations. System: Auptimize optimally relocates sound sources to reduce identification errors. Evaluation: Auptimize was tested for reduction of audio- based source identification errors. Result: Auptimize effectively reduces errors in spatial audio cue identification.
  127. Towards Music-Aware Virtual Assistants Wang, Alexander and Lindlbauer, David and

    Donahue, Chris Background: Spoken notifications enhance hands-free tasks but may intrude on music listening. Purpose: Addressing intrusive interruptions by making notifications music-sensitive is crucial. System: Music-aware assistants modify notifications to harmonize with user's music. Evaluation: A user study compared musical assistants to standard ones for intrusiveness. Result: Musical assistants felt more suitable, less intrusive, and more delightful.
  128. SonifyAR: Context-Aware Sound Generation in Augmented Reality Su, Xia and

    Froehlich, Jon E. and Koh, Eunyee and Xiao, Chang Background: Sound is vital for immersiveness in AR but is limited by authoring challenges. Purpose: Address limits of AR sound authoring due to interaction and context specification issues. System: Introduces SonifyAR, a system using LLM to create context-aware AR sounds. Evaluation: Conducted a study with 8 users and developed 5 AR applications to test system usability. Result: SonifyAR enhances AR by expanding sound design space through automatic context processing.
  129. EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals

    Suzuki, Shunta and Amesaka, Takashi and Watanabe, Hiroki and Shizuki, Buntarou and Sugiura, Yuta Background: Mid-air gestures can keep hearables and hands clean by eliminating touch. Purpose: Current methods need improvements for hearables' mid-air gesture recognition. System: EarHover uses sound leakage with a mic and speaker for gesture recognition. Evaluation: Two device types were tested for gesture detection in real-world scenarios. Result: Seven gestures were identified as suitable, ensuring signal clarity and user acceptance.
  130. Natural Expression of a Machine Learning Model's Uncertainty Through Verbal

    and Non-Verbal Behavior of Virtual Agents Schmidt, Susanne and Rolff, Tim and Voigt, Henrik and Offe, Micha and Steinicke, Frank Background: AI lacks natural uncertainty cues crucial for human communication. Purpose: To merge ML with human-like expression of unreliable responses. System: Creating a system to express uncertainty cues through virtual agents. Evaluation: Trained model analyzed for fidelity and generalizability in behavior. Result: Model effectively generated perceived uncertainty in agent behavior.
  131. Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs

    with Human Preferences Shankar, Shreya and Zamfirescu-Pereira, J.D. and Hartmann, Bjoern and Parameswaran, Aditya and Arawjo, Ian Background: LLMs assist human evaluation but inherit LLM issues, needing validation. Purpose: The research addresses alignment of LLM evaluation with human preferences. System: EvalGen helps generate and validate evaluation criteria, involving user feedback. Evaluation: EvalGen iteratively refines criteria by grading LLM outputs for user alignment. Result: EvalGen shows promise but reveals subjective, iterative criteria development.
  132. LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task

    Automation Zhang, Li and Wang, Shihe and Jia, Xianqing and Zheng, Zhihan and Yan, Yunhe and Gao, Longxi and Li, Yuanchun and Xu, Mengwei Background: Current mobile UI automation evaluations are unscalable and unfaithful. Purpose: The study tackles limitations in evaluating mobile UI task automation's efficacy. System: LlamaTouch offers scalable, faithful evaluation by focusing on UI state traversal. Evaluation: Incorporates four agents and 496 tasks for comprehensive, real-world evaluation. Result: LlamaTouch is more faithful and scalable, simplifying task annotation and agent integration.
  133. Clarify: Improving Model Robustness With Natural Language Corrections Lee, Yoonho

    and Lam, Michelle S. and Vasconcelos, Helena and Bernstein, Michael S. and Finn, Chelsea Background: Models learn wrong concepts from misleading signals in data. Purpose: Research aims to correct model misconceptions beyond training data. System: Clarify lets users fix models by describing failure patterns textually. Evaluation: User studies and a case study on ImageNet validate Clarify's efficacy. Result: Non-experts can boost performance by describing misconceptions via Clarify.
  134. "The Data Says Otherwise" — Towards Automated Fact-checking and Communication

    of Data Claims Fu, Yu and Guo, Shunan and Hoffswell, Jane and S. Bursztyn, Victor and Rossi, Ryan and Stasko, John Background: Manual fact-checking of data claims is tedious and can be intractable. Purpose: The research aims to automate the fact-checking process for data claims. System: Aletheia is a prototype to automate data verification and enhance evidence communication. Evaluation: Evaluated performance on 400 data claims and compared data presentation forms with users. Result: Reveals insights into LLMs' feasibility and pros/cons of visualizations vs. tables.
  135. LoopBot: Representing Continuous Haptics of Grounded Objects in Room-scale

    VR Ikeda, Tetsushi and Fujita, Kazuyuki and Ogawa, Kumpei and Takashima, Kazuki and Kitamura, Yoshifumi Background: Continuous haptic feedback in VR is limited by the user's walking range and force needs. Purpose: The research aims to solve continuous haptic feedback challenges with a single robot solution. System: LoopBot uses a loop-shaped haptic prop on a robot to mimic grounded objects for VR users. Evaluation: A performance evaluation showed that prop scrolling cancels 77.5% of the robot's speed. Result: User tests showed a marked increase in realism and perceived grounding with scrolling.
  136. JetUnit: Rendering Diverse Force Feedback in Virtual Reality Using Water

    Jets Zhang, Zining and Li, Jiasheng and Yan, Zeyu and Nishida, Jun and Peng, Huaishu Background: VR force feedback lacks intensity and frequency variety, needing a dry solution. Purpose: Develop a VR haptic device producing intense force feedback via water jets. System: JetUnit optimizes parameters to mimic water jet force while keeping users dry. Evaluation: Key design determined via quantitative experiments & user perception studies. Result: JetUnit enhances VR immersion by delivering diverse force feedback sensations.
  137. Selfrionette: A Fingertip Force-Input Controller for Continuous Full-Body Avatar Manipulation

    and Diverse Hashimoto, Takeru and Hirao, Yutaro Background: VR interaction lacks natural movements and haptic feedback. Purpose: This research tackles the need for intuitive VR avatar control and accurate haptics. System: Selfrionette uses fingertip force for VR avatar movement and haptic interaction. Evaluation: Two user studies tested usability, embodiment, and haptic perception. Result: Selfrionette matched body tracking in realism, enabling enhanced VR experiences.
  138. SpinShot: Optimizing Both Physical and Perceived Force Feedback of Flywheel-Based,

    Directional Impact Fan, Chia-An and Wu, En-Huei and Cheng, Chia-Yu and Chang, Yu-Cheng and Lopez, Alvaro and Chen, Yu and Chi, Chia-Chen and Chan, Yi-Sheng and Tsai, Ching-Yi and Chen, Mike Y. Background: Ungrounded force feedback technologies are significantly weaker than real-world forces. Purpose: To develop a device generating strong, directional impulse feedback in handheld devices. System: SpinShot uses a flywheel and solenoid to deliver 22 N·m directional impulses in 1 ms. Evaluation: Studies compared SpinShot to moving mass and air jets in a VR baseball game. Result: SpinShot outperformed others in realism and magnitude but reduced comfort due to weight.
  139. StreetNav: Leveraging Street Cameras to Support Precise Outdoor Navigation for

    Blind Pedestrians Jain, Gaurav and Hindi, Basel and Zhang, Zihao and Srinivasula, Koushik and Xie, Mingyu and Ghasemi, Mahshid and Weiner, Daniel and Paris, Sophie Ana and Xu, Xin Yi Therese and Malcolm, Michael and Turkcan, Background: GPS inaccuracy hinders BLV individuals' outdoor navigation. Purpose: This research explores using street cameras for precise BLV navigation. System: StreetNav repurposes existing cameras with computer vision for navigation aid. Evaluation: Engagements with stakeholders and technical evaluation of StreetNav were conducted. Result: StreetNav offers precise guidance but is affected by occlusions and camera distance.
  140. WorldScribe: Towards Context-Aware Live Visual Descriptions Chang, Ruei-Che and Liu,

    Yuxuan and Guo, Anhong Background: Automated visual descriptions support blind users' independence in navigation. Purpose: Challenge exists to provide rich, contextual, and timely descriptions for blind people. System: WorldScribe creates customizable visual descriptions adaptive to user context. Evaluation: Evaluation included a user study with blind participants and system pipeline testing. Result: WorldScribe delivers adaptive, real-time visual descriptions enhancing environment understanding.
  141. CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool

    Interactions for People with Low Vision Lee, Jaewook and Tjahjadi, Andrew D. and Kim, Jiho and Yu, Junpu and Park, Minji and Zhang, Jiawen and Froehlich, Jon E. and Tian, Yapeng and Zhao, Yuhang Background: Cooking is vital but challenging for people with low vision due to safety issues. Purpose: The paper aims to aid those with low vision in interacting safely with kitchen tools. System: CookAR, a wearable AR system, offers real-time affordance augmentations for kitchen tools. Evaluation: The system was tested through a technical evaluation and a lab study with 10 low-vision participants. Result: Users preferred affordance augmentations over whole object augmentations, showing efficacy.
  142. DesignChecker: Visual Design Support for Blind and Low Vision Web

    Developers Huh, Mina and Pavel, Amy Background: BLV developers face challenges reviewing and improving web designs. Purpose: The research aims to enhance BLV developers' website usability for sighted users. System: DesignChecker helps BLV developers evaluate and improve web designs against standards. Evaluation: User study (N=8) tested participants' ability to find design errors with DesignChecker. Result: Participants identified more visual design errors and were eager to use DesignChecker.
  143. Patterns of Hypertext-Augmented Sensemaking Zhu, Siyi and Haisfield, Robert and

    Langen, Brendan and Chan, Joel Background: Early HCI saw hypertext as transformative; now it's seen as less relevant. Purpose: Study resurgence and design new tools for hypertext-augmented sensemaking. System: Guided tours reveal patterns in revisiting and reusing ideas for scholarly sensemaking. Evaluation: Analyzed patterns from 23 scholars' guided tours to validate these observations. Result: Patterns extend known designs, suggesting new hypertext design opportunities.
  144. Augmented Physics: Creating Interactive and Embedded Physics Simulations from Static

    Textbook Diagrams Gunturu, Aditya and Wen, Yi and Zhang, Nandi and Thundathil, Jarin and Kazi, Rubaiat Habib and Suzuki, Ryo Background: Physics learning lacks interactive tools, posing challenges for students. Purpose: The research addresses creating interactive simulations from textbook diagrams. System: The tool uses machine learning to convert static diagrams into interactive ones. Evaluation: Evaluated via technical tests, usability study, and expert interviews with N=12. Result: Findings show improved engagement and personalization in physics education.
  145. Qlarify: Recursively Expandable Abstracts for Dynamic Information Retrieval over Scientific

    Papers Fok, Raymond and Chang, Joseph Chee and August, Tal and Zhang, Amy X. and Weld, Daniel S. Background: Scientific abstracts often lack detail, leading to reader difficulties. Purpose: To address cognitive gaps when transitioning from abstract to full text. System: Recursively expandable abstracts integrate additional text dynamically. Evaluation: User studies show utility of LLM-facilitated expandable abstracts. Result: Future opportunities exist for LLM-based exploration with minimal effort.
  146. LessonPlanner: Assisting Novice Teachers to Prepare Pedagogy-Driven Lesson Plans with

    Large Language Models Fan, Haoxiang and Chen, Guanzheng and Wang, Xingbo and Peng, Zhenhui Background: Novice teachers find creating lesson plans challenging yet beneficial. Purpose: The research aims to use LLMs to simplify and enhance lesson plan preparation. System: LessonPlanner helps construct plans with LLM-generated content and Gagné's nine events of instruction. Evaluation: A within-subjects study compared LessonPlanner to ChatGPT in improving lesson plans. Result: LessonPlanner significantly enhances lesson plan quality and reduces preparation workload.