
Multi-Kinect Motion Tracking and Interactive Installations


Delivered as a masterclass at the Nucl.AI 2015 conference. I focused on ways to deal with erratic and noisy input signals, on coordinating multiple Kinect sensors via RabbitMQ, and on offloading heavy processing to multiple computers.

The video is currently available in the Nucl.AI archives for conference attendees and AIGameDev subscribers.

Ricardo J. Méndez

July 20, 2015

Transcript

  1. @ArgesRic
    MULTIPLE KINECTS
    AND INTERACTIVE INSTALLATIONS
    Ricardo J. Méndez


  2. @ArgesRic
    WHO AM I?
    • Ricardo J. Méndez
    • Founder, Numergent
    • E-mail: [email protected]
    • Twitter: @ArgesRic


  3. @ArgesRic
    WHAT DO I DO?
    • Work with media agencies
    • Data visualization and interactive installations
    • Previously:
    • Data analysis for banking, healthcare
    • Game development


  4. @ArgesRic
    TALK STRUCTURE
    • Show a bit about the installations
    • Give a high level overview
    • Talk about using multiple networked Kinects
    • Get into the nitty-gritty of avateering gotchas, both technical and design


  5. @ArgesRic
    (http://www.penny-arcade.com/comic/2011/03/25)


  6. @ArgesRic
    INTERACTIVE INSTALLATIONS?
    • Gigantic, in-place, Kinect-powered mini-games
    • 8 m × 3 m screens made out of MicroTiles
    • Promotional in nature, advertising oriented
    • Look cool, easy to understand, grab the user’s attention
    • Months of work for something players will experience in 30-60 seconds.


  7. @ArgesRic
    HIGH LEVEL OVERVIEW


  8. @ArgesRic
    HIGH LEVEL OVERVIEW
    • 3D application done in Unity 4
    • Multiple Kinects: three for the 1st-gen installation, two for the 2nd gen
    • Distributed team - New York, Hamburg, Berlin, Bucharest
    • 4,000 autonomous agents


  9. @ArgesRic
    WHY SO MANY AGENTS?
    • Working with vague concepts like “the power of the network”.
    • Meant to represent data.
    • Flashes through the streams to represent communication.
    • Each agent acts independently to provide more visual variety.


  10. @ArgesRic
    UNITYSTEER
    • MIT-Licensed
    • https://github.com/ricardojmendez/UnitySteer
    • Started as a port of Craig Reynolds’ OpenSteer
    • Each background particle is a UnitySteer agent making its own
    decisions and signaling to neighbors
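The steering logic each agent runs can be sketched with the classic "seek" rule from Reynolds-style steering behaviors. This is an illustrative Python stand-in, not actual UnitySteer code (which is C#):

```python
def seek(position, velocity, target, max_speed):
    """Return a steering force pushing the agent toward the target.

    The agent computes the velocity it *wants* (straight at the target,
    at full speed) and steers by the difference from its current velocity.
    """
    desired = [t - p for t, p in zip(target, position)]
    length = sum(d * d for d in desired) ** 0.5
    if length == 0:
        return [0.0, 0.0, 0.0]
    desired = [d / length * max_speed for d in desired]
    return [d - v for d, v in zip(desired, velocity)]
```

With thousands of agents, each evaluating rules like this against only its nearby neighbors, small per-agent decisions add up to the varied flocking motion described above.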


  11. @ArgesRic
    WHY MULTIPLE NETWORKED KINECTS?


  12. @ArgesRic
    DESIGN REQUIREMENTS


  13. @ArgesRic
    TECHNICAL LIMITATIONS
    • First gen. installation called for 3 Kinects
    • … but the Kinect 1 SDK only supported two per machine.
    • Second gen. installation called for 2 Kinects
    • … but the Kinect 2 SDK supports only one per machine.


  14. @ArgesRic
    SOLUTION: KINECT REMOTE
    • Kinect Remote: https://github.com/ricardojmendez/Kinect2Remote
    • MIT-licensed .NET application and client library
    • Processes depth and body information, sends it in protobuf format to
    a RabbitMQ server
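The routing idea can be sketched like this: one routing key per sensor and data type, with a short per-message TTL. The names and the JSON payload are illustrative stand-ins; the real remote is a .NET application publishing protobuf, not this Python sketch:

```python
import json

def make_message(sensor_id, data_type, payload):
    """Build the routing key, AMQP properties, and body for one frame's data.

    One routing key per sensor and data type lets consumers bind only to
    the streams they care about; the AMQP `expiration` property (in ms,
    as a string) keeps stale frames from piling up in queues.
    """
    routing_key = "kinect.{}.{}".format(sensor_id, data_type)
    properties = {"expiration": "35"}  # per-message TTL in milliseconds
    return routing_key, properties, json.dumps(payload).encode()
```

A consumer would then bind a queue per data type (e.g. `kinect.*.body`) and deserialize only what it needs.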


  15. @ArgesRic
    KINECT REMOTE
    • Single package per data type per frame.
    • Multiple queues, one per data type.
    • Messages expire after 35 ms - missed frames can be interpolated.
    • Allows for “Body Processors”, which see (and can tag) all data before
    it gets stuffed into a “Body Bag”.
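The "expire and interpolate" idea can be sketched as a small buffer that keeps the two most recent frames per data type and blends between them, so a dropped or expired packet doesn't cause a visible hitch. Class and method names are illustrative, not from the actual Kinect2Remote code:

```python
class JointStream:
    """Holds the last two frames of joint data and samples between them."""

    def __init__(self):
        self.frames = []  # list of (timestamp, {joint_name: (x, y, z)})

    def push(self, t, joints):
        # Keep only the two most recent frames.
        self.frames = (self.frames + [(t, joints)])[-2:]

    def sample(self, now):
        """Return joint positions at time `now`, linearly interpolating
        between the two most recent frames (clamped, no extrapolation)."""
        if not self.frames:
            return None
        if len(self.frames) == 1:
            return self.frames[0][1]
        (t0, f0), (t1, f1) = self.frames
        a = min(1.0, max(0.0, (now - t0) / (t1 - t0)))
        return {j: tuple(p0 + a * (p1 - p0)
                         for p0, p1 in zip(f0[j], f1[j]))
                for j in f1}
```

Clamping the blend factor trades a frame of latency for stability: the avatar never overshoots on a glitchy reading.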


  16. @ArgesRic
    WHAT DO THE BODY PROCESSORS DO?
    • Passer-by detection
    • User selection
    • Deciding whether to hold on to a user
    • Limb ambiguity resolution
    • Joint velocity calculation
    • Gesture recognition
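Joint velocity, one of the simpler processor duties above, is just a finite difference between consecutive frames. A minimal sketch (function name is illustrative, not from the actual code):

```python
def joint_velocity(prev_pos, curr_pos, dt):
    """Finite-difference velocity for a single joint, in units per second.

    Computed server-side so every consumer gets consistent velocities
    without re-deriving them from raw positions.
    """
    return tuple((c - p) / dt for c, p in zip(curr_pos, prev_pos))
```

In practice the raw result is noisy, so it would typically be smoothed before gesture heuristics consume it.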


  17. @ArgesRic
    GESTURE RECOGNITION
    • Discrete and continuous gestures
    • Both generations had gesture controls
    • 1st gen. used heuristics
    • 2nd gen. uses Microsoft’s gesture recognizer
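A heuristic discrete gesture of the kind the 1st generation used can be as simple as a threshold with hysteresis, so sensor noise near the boundary doesn't make the gesture flicker on and off. This is an illustrative sketch, not the project's actual recognizer:

```python
def hand_raised(hand_y, head_y, was_raised, margin=0.05):
    """Detect a 'hand above head' gesture with hysteresis.

    The hand must clearly cross the head level (by `margin` metres) to
    toggle state, so jitter around the threshold doesn't flicker it.
    """
    if was_raised:
        return hand_y > head_y - margin  # stay raised until clearly below
    return hand_y > head_y + margin      # require clearly above to trigger
```

Continuous gestures (e.g. steering a hang-glider by leaning) would instead map a joint offset to a normalized control value each frame.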


  18. @ArgesRic
    A BRIEF OVERVIEW OF THE
    INSTALLATIONS


  19. @ArgesRic
    FIRST GENERATION
    • One passer-by per Kinect, represented by an agent cloud
    • Users play with data (simple interaction)
    • Avatar made out of particles plucked from the stream
    • Avatar eventually dissolves and goes back to the stream
    • Pure avateering


  20. @ArgesRic
    AVATAR GENERATION


  21. @ArgesRic
    AVATAR GENERATION


  22. @ArgesRic
    SECOND GENERATION
    • Six passers-by per Kinect, who can play with the background streams.
    • More game oriented - much more complex interaction.
    • Three mini-games: music jam, football, hang-glider.
    • Mixed avateering, animation, inverse kinematics


  23. @ArgesRic
    SECOND GENERATION
    • Six passers-by per Kinect, who can play with the background streams.
    • More game oriented - much more complex interaction.
    • Three mini-games: music jam, football, hang-glider.
    • Mixed avateering, animation, inverse kinematics


  24. @ArgesRic
    AVATEERING CONSIDERATIONS


  25. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.


  26. @ArgesRic
    CONSIDERATIONS


  27. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you
    want.


  28. @ArgesRic
    CONSIDERATIONS


  29. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Abstract, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how
    directly you can apply Kinect data.


  30. @ArgesRic
    LIMB ORIENTATION
    • Assumption: body orientation data is a series of transforms in a
    hierarchy.
    • Reality: each joint only gives you the direction in which the next joint lies.
    • Remember: what Kinect sees is a shadow.
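The practical consequence: per bone, all you can reliably derive is a direction vector between joint positions; roll around the bone axis is not observable from a silhouette. A minimal sketch of that derivation (function name is illustrative):

```python
def bone_direction(parent, child):
    """Unit vector from a joint to its child.

    This is essentially all the orientation information a 'shadow'
    sensor gives you per bone; twist around the bone axis must be
    inferred or constrained separately.
    """
    d = [c - p for c, p in zip(child, parent)]
    n = sum(x * x for x in d) ** 0.5
    return tuple(x / n for x in d)
```

Turning that direction into a full bone rotation requires picking an up-vector convention, which is exactly where naive avateering goes wrong.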


  31. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly
    you can apply Kinect data.
    • You’ll trade avateering freedom for reasonable constraints.


  32. @ArgesRic
    (STILL ON FREEDOM VS. AVATEERING)
    • Close to launch, we get a report that the upper body is fine, but legs
    are kicking around like crazy
    • Client wants users to be able to lift their legs and kick, so we can’t
    just lock them down
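One middle ground between locking the legs down and passing raw data through is to limit how far a joint may move per frame: deliberate kicks still come through, while single-frame tracking glitches are flattened. An illustrative sketch of the idea, not the fix actually shipped:

```python
def clamp_step(prev, raw, max_step):
    """Limit per-frame joint displacement to `max_step` units.

    Real, sustained motion accumulates over several frames and passes
    through; a one-frame spike from a tracking glitch gets clipped.
    """
    d = [r - p for r, p in zip(raw, prev)]
    n = sum(x * x for x in d) ** 0.5
    if n <= max_step:
        return tuple(raw)
    return tuple(p + x / n * max_step for p, x in zip(prev, d))
```

The cost is a speed cap on genuine fast motion, so `max_step` has to be tuned against the fastest kick the design wants to honor.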


  33. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly you can apply
    Kinect data.
    • You’ll trade avateering freedom for reasonable constraints.
    • The more like a person your avatar looks, the closer players will expect to be
    mimicked.


  34. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect
    data.
    • You’ll trade avateering freedom for reasonable constraints.
    • The more like a person your avatar looks, the closer players will expect to be mimicked.
    • The more like a person your avatar looks, the closer players will mimic it.


  35. @ArgesRic
    CONCLUSIONS


  36. @ArgesRic
    CONCLUSIONS
    • If you need to do avateering, try to let designers do the tweaking by
    providing them with tools. Avatar design may interfere with this.
    • Gesture recognition works great.
    • Combining animation and avateering is a great way to convey an
    impression to your users.
    • You’ll need to do a lot of data massaging. Make sure you schedule for it.


  37. @ArgesRic
    QUESTIONS?


  38. @ArgesRic
    THANKS!
    https://numergent.com/talks/
