
Predator-Prey Vision Metaphor for Multi-Tasking Virtual Environments

3DUI 2012

Anton Treskunov

June 08, 2012

Transcript

  1. Predator-Prey Vision Metaphor for Multi-Tasking Virtual Environments.
     A. Sherstyuk (University of Hawaii), A. Treskunov (Samsung),
     M. Gavrilova (University of Calgary). March 5, 2012
  2. Human eyes. The eyes are a flexible visual instrument, tightly coupled
     with proprioception. Together, they create a continuous image of
     reality.
  3. HMD-based VR. The range of eye movements is restricted by the display
     FOV, which is typically narrow. This is commonly compensated for by
     increased head rotation for wide-range viewing tasks, and by
     positioning the object at the FOV center for near-range interaction.
     The latter often fails at close distances, and users resort to
     monoscopic viewing.
  4. Predator-prey metaphor. Different tasks and conditions call for
     different types of rendering optimization: search vs. manipulation;
     wide FOV vs. correct depth perception; prey vs. predator.
  5. Predator/prey vision in real and virtual worlds. Humans routinely use
     both modes. In VR, non-panoramic HMDs enforce predator mode. Is it
     possible to recognize the current task and adjust the rendering
     accordingly, providing optimal viewing conditions?
  6. Previous work. Multiple studies have demonstrated that a restricted
     FOV has a negative impact on perception in VR. Compensation by
     non-linear mapping between head and virtual camera (prey mode):
     amplified head rotation (sketched below) gave a 21% increase in
     performance for the horizontal direction [Jay and Hubbold 2003], and a
     25% reduction of user effort during visual search for pitch and roll
     [Bolte et al. 2010]. Caveats: nulling and directional compliance;
     inability to operate one's own virtual hand. View sliding gave a 50%
     increase in performance for certain tasks [Sherstyuk et al. 2011].
     Dynamic camera convergence for near-range viewing (predator mode)
     [State et al. 2001, Sherstyuk and State 2010].
  7. Implementation I: Task-dependent camera control. Close-range viewing
     (predator mode): automatic camera convergence. Wide-range viewing
     (prey mode): simulated divergence. States: (A) Conventional, for
     general-purpose viewing; (B) Near field, convergence on the point of
     fixation; (C) Wide area, temporary divergence to "peek outside";
     (D) "Cyclopic camera", for working with the UI, pointing, etc.
     A sketch of the per-state camera offsets follows below.
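
To make the four states concrete, here is a hedged Python sketch of
per-eye yaw offsets for a symmetric stereo rig; the function name, the IPD
value, and the default 12-degree divergence (matching the proof-of-concept
figure later in the deck) are assumptions, not the authors' implementation.

```python
import math

# Minimal sketch of the four camera states, assuming a symmetric stereo
# rig parameterized by interpupillary distance `ipd`; all names here are
# illustrative.

def eye_yaw_offsets(mode: str,
                    ipd: float = 0.064,          # meters
                    fixation_dist: float = 0.4,  # meters, near-field target
                    divergence_deg: float = 12.0):
    """Return (left_yaw, right_yaw) in degrees, added to the head yaw.

    Positive yaw turns a camera toward the viewer's right.
    """
    if mode == "conventional":    # (A) parallel cameras, general viewing
        return 0.0, 0.0
    if mode == "near_field":      # (B) converge on the fixation point
        half = math.degrees(math.atan2(ipd / 2.0, fixation_dist))
        return half, -half        # both eyes rotate inward
    if mode == "wide_area":       # (C) temporary divergence, "peek outside"
        half = divergence_deg / 2.0
        return -half, half        # both eyes rotate outward
    if mode == "cyclopic":        # (D) single centered camera for UI work
        return 0.0, 0.0           # plus: render once from the eye midpoint
    raise ValueError(f"unknown mode: {mode}")
```

Converging on a fixation point at distance d turns each eye inward by
atan((ipd/2) / d), roughly 4.6 degrees per eye at 40 cm with a 64 mm IPD.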
  8. Implementation II: Mode switching. Assumptions: there is at least one
     virtual hand; neck rotation is known. Transition conditions:
     A → B: 1. the virtual hand is visible in both cameras; 2. neck
     rotation ≤ threshold T.
     A → C: 3. the virtual hand is not visible; 4. neck rotation ≤
     threshold T.
     A → D: 5. the virtual hand is used as a pointer (extended and
     visible).
     A sketch of this state machine follows below.
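
A compact way to express these transition conditions is a state machine
keyed on hand visibility and neck rotation. The sketch below encodes only
the transitions listed on the slide; the threshold value T_NECK_DEG and
the fall-back to state A are assumptions.

```python
# Hedged sketch of the mode-switching logic; the threshold value and the
# tracking inputs (per-camera hand visibility, neck rotation) are
# assumptions about how such state would be queried.

T_NECK_DEG = 20.0  # hypothetical neck-rotation threshold T

def next_state(state: str,
               hand_visible_left: bool,
               hand_visible_right: bool,
               hand_is_pointer: bool,
               neck_rotation_deg: float) -> str:
    """One update step of the A/B/C/D state machine.

    Only the transitions out of A listed on the slide are encoded, with
    the slide's condition numbers in the comments; dropping back to A
    when a mode's conditions lapse is an assumed default.
    """
    if state == "A":                                   # conventional viewing
        if hand_is_pointer:                            # condition 5
            return "D"                                 # cyclopic camera
        if (hand_visible_left and hand_visible_right
                and neck_rotation_deg <= T_NECK_DEG):  # conditions 1 and 2
            return "B"                                 # near-field convergence
        if (not hand_visible_left and not hand_visible_right
                and neck_rotation_deg <= T_NECK_DEG):  # conditions 3 and 4
            return "C"                                 # wide-area divergence
        return "A"
    # Assumed: leave B/C/D as soon as their entry conditions no longer hold.
    return "A"
```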
  9. Proof-of-concept prototype. VR-Triage setup for teaching first
     responders. Tasks: search, examination, treatment, awareness.
     Desktop-based implementation. [Figure: divergence of 12 degrees]
  10. Discussion. A system that implements the predator-prey vision
      metaphor; a divergence approach to wide-area viewing; a non-immersive
      proof of concept. Temporary divergence of up to 10-15 degrees seems
      acceptable. Since mode switching is coupled with user motions, we
      expect a self-regulating effect. An immersive user test is required.
      Possible testbeds: triage with enemies; an elimination game.
      Thank you!