
Distributed Collaborative Construction in Mixed Reality


Distributed collaboration, portable mobile applications, natural user interfaces and comprehensive systems have been identified as future research directions in recent reviews of mixed reality in construction. At the same time, current research in the mixed reality field identifies movement and anthropometric realism as critical success factors for an immersive virtual environment. Advances in object tracking, online (human) 3D reconstruction and gestural interfaces, accompanied by wearable mobile displays, provide the technological basis to address the challenges in both areas. In this paper, we propose a comprehensive immersive environment for a distributed collaborative construction process in a mixed reality setup. Participants at remote sites, equipped solely with smart see-through glasses, cooperate in constructing a virtual 3D model that combines real objects (tangibles) and virtual ones. We consider our solution to best support a distributed collaborative construction task by increasing the immersion of the environment, i.e.: (1) creating the impression of real collaboration by mirroring the behavior of participants in a common virtual scene; (2) providing more natural interaction through freehand gestures; (3) increasing the physical experience of the user through wearable 3D displays and construction with tangibles.

Christian Blank

February 25, 2015

Transcript

  1. distributed collaborative construction in
    mixed reality
    Christian Blank, Malte Eckhoff, Iwer Petersen,
    Raimund Wege and Birgit Wendholt
    February 25, 2015
    Immersive Interactive Environments (I2E),
    University of Applied Sciences Hamburg


  2. Research Statement
    Related Work
    System Overview
    Components
    Current State & Future Work

  3. research statement


  4. research statement
    Figure: I2E Working Environment.
    ∙ Distributed collaboration
    ∙ Mixed reality construction
    ∙ Constraint-based construction
    ∙ Physical / gestural interaction
    ∙ Full range of motion
    ∙ Virtual presence

  5. related work


  6. physical collaboration in mixed reality
    Figure: MirageTable Scenarios [BJW12].
    + Distributed collaboration
    + Mixed reality construction
    + Physical interaction
    (+) Virtual presence (front view only)
    – Gestural interaction
    – Constraint-based construction
    – Range of motion

  7. prototyping in mixed reality
    Figure: MixFab [WLK+14].
    + Mixed reality construction
    + Gestural interaction
    – Distributed collaboration
    – Constraint-based construction
    – Physical interaction
    – Range of motion
    – Virtual presence

  8. system overview


  9. system overview
    Each client instance combines sensor components (tangible tracking,
    gesture recognition, reconstruction), a logic component (construction
    logic) and output components (scene visualization, mobile display and
    positioning). Components communicate over a LAN connection within a
    site; client instances at remote sites (e.g. Hamburg, New York, Paris)
    are connected via WAN over the Internet.
    Figure: (1) Client instance and component data flow. (2) Client instance distribution.
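    The slides do not specify a wire format for the component data flow; as a
    minimal sketch of how a sensor component might hand results to the logic
    component, assuming a JSON envelope whose field names are purely
    illustrative:

    ```python
    import json
    import time

    # Hypothetical message envelope for intra-client component traffic; the
    # field names are illustrative assumptions, not the project's actual format.
    def make_envelope(source, target, payload, site="Hamburg"):
        return json.dumps({
            "site": site,          # originating client location
            "source": source,      # e.g. a sensor component
            "target": target,      # e.g. the logic component
            "timestamp": time.time(),
            "payload": payload,    # component-specific data
        })

    # A sensor component reporting a tracked building block:
    msg = make_envelope(
        source="tangible-tracking",
        target="construction-logic",
        payload={"block_id": 7, "pose": [0.1, 0.0, 0.4, 0.0, 0.0, 0.0, 1.0]},
    )
    print(msg)
    ```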

  10. components


  11. construction logic using constraints
    Building blocks and the composite construct expose connection points
    annotated with constraints, which define the valid joints between them.
    Figure: (1) Connection points for possible joints of building blocks and the construct. (2)
    Composite construct. (3) Joining the composite construct and building blocks.
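    To make the constraint idea concrete, a minimal sketch follows; the socket
    types, compatibility table and alignment threshold are assumptions for
    illustration, not the project's actual construction logic:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ConnectionPoint:
        socket_type: str   # assumed socket kinds, e.g. "stud" or "tube"
        direction: tuple   # outward unit normal in construct space

    # Assumed compatibility constraint: only complementary sockets join.
    COMPATIBLE = {("stud", "tube"), ("tube", "stud")}

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def can_join(a: ConnectionPoint, b: ConnectionPoint, align=-0.95):
        """Two connection points can join if their socket types are
        complementary and their normals face each other (dot near -1)."""
        types_ok = (a.socket_type, b.socket_type) in COMPATIBLE
        return types_ok and dot(a.direction, b.direction) <= align

    block = ConnectionPoint("stud", (0.0, 1.0, 0.0))
    construct = ConnectionPoint("tube", (0.0, -1.0, 0.0))
    print(can_join(block, construct))  # True: complementary and facing
    ```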

  12. tangible tracking and mixed construction
    Figure: Rotation-invariant marker-based tracking of cubes that represent building blocks.
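    One way such rotation-invariant tracking can work is a fiducial marker per
    cube face with a known rigid transform from each marker to the cube center,
    so any visible face yields the same cube pose. A sketch under that
    assumption (the face offsets below are placeholders, and face rotations are
    omitted for brevity):

    ```python
    import numpy as np

    def face_to_cube(face):
        """4x4 transform from a face-marker frame to the cube-center frame
        for a unit cube (assumed marker placement; rotations omitted)."""
        offsets = {
            "front": (0, 0, -0.5), "back":   (0, 0, 0.5),
            "left":  (0.5, 0, 0),  "right":  (-0.5, 0, 0),
            "top":   (0, -0.5, 0), "bottom": (0, 0.5, 0),
        }
        T = np.eye(4)
        T[:3, 3] = offsets[face]
        return T

    def cube_pose(marker_pose, face):
        """Camera-to-cube pose from one detected camera-to-marker pose."""
        return marker_pose @ face_to_cube(face)

    # Marker on the 'front' face detected 1 m in front of the camera:
    marker = np.eye(4)
    marker[2, 3] = 1.0
    print(cube_pose(marker, "front")[:3, 3])  # cube center: [0. 0. 0.5]
    ```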

  13. gesture recognition
    Physical Gestures
    Figure: Physically correct interaction with virtual objects.
    Interpreted Gestures
    ∙ Movement of the user in 3D space
    ∙ Template-based matching algorithm similar to [KNQ12]
    ∙ Recognized gestures can be mapped to commands (see the sketch below)
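    As one concrete instance of the template-based idea, a dynamic time warping
    (DTW) matcher over 3D trajectories is sketched below; [KNQ12] describes its
    own matcher, so DTW and the gesture name here are stand-in assumptions:

    ```python
    import numpy as np

    def dtw(a, b):
        """DTW distance between two (N,3) and (M,3) joint trajectories."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def recognize(trajectory, templates, threshold=5.0):
        """Return the command of the closest template under the threshold."""
        name, dist = min(((k, dtw(trajectory, t)) for k, t in templates.items()),
                         key=lambda kv: kv[1])
        return name if dist < threshold else None

    templates = {"swipe-right": np.array([[x / 10, 0, 0] for x in range(10)])}
    observed = np.array([[x / 9, 0.02, 0] for x in range(9)], dtype=float)
    print(recognize(observed, templates))  # 'swipe-right'
    ```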

  14. gesture recognition
    Figure: Gesture recognition pipeline.

  15. visualization, mobile display and positioning
    The mobile device reports its viewport field of view, resolution and a
    viewport transform obtained from tag-based positioning to the 3D scene
    renderer; the rendered scene is streamed back to the see-through glasses.
    Figure: (1) Marker-based positioning of the mobile display. (2) Scene rendering for the mobile
    perspective. (3) Mixed reality view.
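    A minimal sketch of step (2), assuming OpenGL-style column-vector
    conventions (the slides do not specify the renderer's actual math): the
    view matrix is the inverse of the display pose delivered by tag-based
    positioning, and the projection comes from the reported viewport
    parameters.

    ```python
    import numpy as np

    def perspective(fov_y_deg, width, height, near=0.1, far=100.0):
        """OpenGL-style perspective projection from viewport parameters."""
        f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
        aspect = width / height
        P = np.zeros((4, 4))
        P[0, 0] = f / aspect
        P[1, 1] = f
        P[2, 2] = (far + near) / (near - far)
        P[2, 3] = 2 * far * near / (near - far)
        P[3, 2] = -1.0
        return P

    def view_matrix(display_pose):
        """Invert the world pose of the glasses (from tag-based positioning)."""
        return np.linalg.inv(display_pose)

    pose = np.eye(4)
    pose[2, 3] = 2.0  # glasses 2 m from the scene origin (made-up example)
    P, V = perspective(40.0, 1280, 720), view_matrix(pose)
    print(P @ V @ np.array([0.0, 0.0, 0.0, 1.0]))  # scene origin in clip space
    ```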

  16. user reconstruction
    Figure: (1) Scanning setup using four cameras. (2) Resulting depth images.
    ∙ Visual support for virtual collaboration [MBS+11]
    ∙ Online 3D reconstruction
    ∙ Reconstruction from multiple depth images

  17. user reconstruction
    Pipeline: depth & color image source → image processing → point cloud
    processing → mesh processing → mesh streaming.
    Figure: Above: reconstruction pipeline. Below: single-camera reconstruction result.
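    A minimal sketch of the point-cloud-processing step, back-projecting one
    depth image with a pinhole model; the intrinsics below are placeholder
    values, and merging the four cameras of the scanning setup would add one
    calibrated rigid transform per camera into a common frame:

    ```python
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Convert an (H,W) depth image in meters to an (N,3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

    depth = np.full((480, 640), 1.5)  # synthetic flat wall at 1.5 m
    cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)  # (307200, 3)
    ```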

  18. current state & future work


  19. current state
    Middleware: Intra-client communication tested
    Construction logic: Implemented and tested
    Tangible tracking: Implemented and tested
    Gesture recognition: Physical gestures implemented and under test
    User reconstruction: Pipeline tested using a single camera
    Mobile display and positioning: Data flow between mobile and renderer established

  20. future work
    Middleware: Inter-client communication still to be implemented
    Tangible tracking: Move to a markerless approach
    Gesture recognition: Combine and evaluate the concepts
    User reconstruction: Multi-camera calibration and handling
    Mobile display and positioning: Use head tracking for viewport calculation

  21. Questions?
    Slides: http://bit.ly/1Lyuvnn

  22. references
    [BJW12] Hrvoje Benko, Ricardo Jota, and Andrew Wilson. MirageTable: Freehand interaction on a
    projected augmented reality tabletop. In Proceedings of the SIGCHI Conference on Human Factors
    in Computing Systems, CHI '12, pages 199–208, New York, NY, USA, 2012. ACM.
    [KNQ12] Per Ola Kristensson, Thomas Nicholson, and Aaron Quigley. Continuous recognition of
    one-handed and two-handed gestures using 3D full-body motion tracking sensors. In Proceedings
    of the 2012 ACM International Conference on Intelligent User Interfaces, IUI '12, pages 89–92,
    New York, NY, USA, 2012. ACM.
    [MBS+11] Erin A. McManus, Bobby Bodenheimer, Stephan Streuber, Stephan de la Rosa, Heinrich H.
    Bülthoff, and Betty J. Mohler. The influence of avatar (self and character) animations on
    distance estimation, object interaction and locomotion in immersive virtual environments. In
    Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization,
    APGV '11, pages 37–44, New York, NY, USA, 2011. ACM.
    [WLK+14] Christian Weichel, Manfred Lau, David Kim, Nicolas Villar, and Hans W. Gellersen.
    MixFab: A mixed-reality environment for personal fabrication. In Proceedings of the 32nd Annual
    ACM Conference on Human Factors in Computing Systems, CHI '14, pages 3855–3864, New York, NY,
    USA, 2014. ACM.