Multi-Kinect motion tracking and Interactive Installations

Delivered as a masterclass at the Nucl.AI 2015 conference. I focused on ways to deal with erratic and noisy input signals, on coordinating multiple Kinect sensors via RabbitMQ, and on offloading heavy processing to multiple computers.

The video is currently available in the Nucl.AI archives for conference attendees and AIGameDev subscribers.

Ricardo J. Méndez

July 20, 2015

Transcript

  1. @ArgesRic
    MULTIPLE KINECTS
    AND INTERACTIVE INSTALLATIONS
    Ricardo J. Méndez

  2. @ArgesRic
    WHO AM I?
    • Ricardo J. Méndez
    • Founder, Numergent
    • E-mail: [email protected]
    • Twitter: @ArgesRic

  3. @ArgesRic
    WHAT DO I DO?
    • Work with media agencies
    • Data visualization and interactive installations
    • Previously:
    • Data analysis for banking, healthcare
    • Game development

  4. @ArgesRic
    TALK STRUCTURE
    • Show a bit about the installations
    • Give a high level overview
    • Talk about using multiple networked Kinects
    • Get into the nitty-gritty of avateering gotchas, both technical and design

  5. @ArgesRic
    (http://www.penny-arcade.com/comic/2011/03/25)

  6. @ArgesRic
    INTERACTIVE INSTALLATIONS?
    • Gigantic, in-place, Kinect-powered mini-games
    • 8m × 3m screens made out of MicroTiles
    • Promotional in nature, advertising oriented
    • Look cool, easy to understand, grab the user’s attention
    • Months of work for something players will experience in 30-60 seconds.

  7. @ArgesRic

  8. @ArgesRic
    HIGH LEVEL OVERVIEW

  9. @ArgesRic
    HIGH LEVEL OVERVIEW
    • 3D application done in Unity 4
    • Multiple Kinects: 3 on the 1st gen, 2 on the 2nd gen
    • Distributed team - New York, Hamburg, Berlin, Bucharest
    • 4,000 autonomous agents

  10. @ArgesRic
    WHY SO MANY AGENTS?
    • Working with vague concepts like “the power of the network”.
    • Meant to represent data.
    • Flashes through the streams to represent communication.
    • Each agent acts independently to provide more visual variety.

  11. @ArgesRic
    UNITYSTEER
    • MIT-Licensed
    • https://github.com/ricardojmendez/UnitySteer
    • Started as a port of Craig Reynolds’ OpenSteer
    • Each background particle is a UnitySteer agent making its own
    decisions and signaling to neighbors
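
    The deck itself has no code, but the core idea - every agent derives its own
    desired velocity from local rules each frame and eases towards it - can be
    sketched roughly as follows. This is illustrative only and not UnitySteer's
    actual API (its vehicles, steering behaviours and radars are far richer).

    using UnityEngine;

    // Minimal seek-style steering agent, illustrative of the approach only.
    public class SimpleSteeringAgent : MonoBehaviour
    {
        public Transform target;            // whatever this agent is currently drawn to
        public float maxSpeed = 2f;
        public float turnResponsiveness = 3f;

        Vector3 velocity;

        void Update()
        {
            if (target == null) return;

            // Desired velocity points at the target at full speed.
            Vector3 desired = (target.position - transform.position).normalized * maxSpeed;

            // Ease towards the desired velocity instead of snapping, which keeps
            // thousands of independent agents looking organic.
            velocity = Vector3.Lerp(velocity, desired, turnResponsiveness * Time.deltaTime);
            transform.position += velocity * Time.deltaTime;
        }
    }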

  12. @ArgesRic
    WHY MULTIPLE NETWORKED KINECTS?

  13. @ArgesRic
    DESIGN REQUIREMENTS

  14. @ArgesRic
    TECHNICAL LIMITATIONS
    • First gen. installation called for 3 Kinects
    • … but the Kinect 1 SDK only supported two per machine.
    • Second gen. installation called for 2 Kinects
    • … but the Kinect 2 SDK supports only one per machine.

  15. @ArgesRic
    SOLUTION: KINECT REMOTE
    • Kinect Remote: https://github.com/ricardojmendez/Kinect2Remote
    • MIT-licensed .NET application and client library
    • Processes depth and body information, sends it in protobuf format to
    a RabbitMQ server
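
    The wire-format side boils down to a protobuf contract plus a serializer
    call. A minimal sketch using protobuf-net; the message layout below is
    invented for illustration - the real contracts are in the Kinect2Remote
    repository.

    using System.IO;
    using ProtoBuf;   // protobuf-net

    // Hypothetical, trimmed-down body frame; the real messages carry more
    // (tracking state, hand states, depth data, etc.).
    [ProtoContract]
    public class JointData
    {
        [ProtoMember(1)] public int JointType;
        [ProtoMember(2)] public float X;
        [ProtoMember(3)] public float Y;
        [ProtoMember(4)] public float Z;
    }

    [ProtoContract]
    public class BodyFrameMessage
    {
        [ProtoMember(1)] public ulong TrackingId;
        [ProtoMember(2)] public long TimestampTicks;
        [ProtoMember(3)] public JointData[] Joints;
    }

    public static class BodySerialization
    {
        // Serialize a frame into the compact protobuf payload that goes on the wire.
        public static byte[] ToBytes(BodyFrameMessage frame)
        {
            using (var stream = new MemoryStream())
            {
                Serializer.Serialize(stream, frame);
                return stream.ToArray();
            }
        }
    }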

  16. @ArgesRic
    KINECT REMOTE
    • Single package per data type per frame.
    • Multiple queues, one per data type.
    • Messages expire after 35 ms - if a frame is missed, we can interpolate.
    • Allows for “Body Processors”, which see (and can tag) all data before
    it gets stuffed into a “Body Bag”.
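
    A sketch of what that publishing policy could look like with the classic
    RabbitMQ .NET client: one queue per data type, one message per frame, each
    with a 35 ms TTL. Queue names and topology here are made up and not
    Kinect2Remote's actual ones.

    using RabbitMQ.Client;

    public class RemotePublisher
    {
        readonly IModel channel;

        public RemotePublisher(string host)
        {
            var factory = new ConnectionFactory { HostName = host };
            channel = factory.CreateConnection().CreateModel();

            // One queue per data type keeps a slow consumer of one stream
            // from holding up the others.
            channel.QueueDeclare("kinect.bodies", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.QueueDeclare("kinect.depth", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);
        }

        public void PublishBodyFrame(byte[] protobufPayload)
        {
            var props = channel.CreateBasicProperties();
            props.Expiration = "35";   // per-message TTL in milliseconds

            channel.BasicPublish(exchange: "", routingKey: "kinect.bodies",
                                 basicProperties: props, body: protobufPayload);
        }
    }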

  17. @ArgesRic
    WHAT DO THE BODY PROCESSORS DO?
    • Passer-by detection
    • User selection
    • Deciding whether to hold on to the user
    • Limb ambiguity
    • Joint velocity calculation
    • Gesture recognition
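
    The deck doesn't show the processor API; the shape of the idea is a hook
    that sees every tracked body for the frame before it is packed up and
    published. An illustrative sketch (reusing the invented BodyFrameMessage
    and JointData types from the protobuf sketch above):

    using System;
    using System.Collections.Generic;

    // Illustrative only - the real Kinect2Remote interfaces differ.
    public interface IBodyProcessor
    {
        void Process(IList<BodyFrameMessage> bodies, double deltaSeconds);
    }

    // Example: per-body peak joint speed, the kind of value that feeds
    // passer-by detection and user selection further down the line.
    public class JointVelocityProcessor : IBodyProcessor
    {
        readonly Dictionary<ulong, JointData[]> previous = new Dictionary<ulong, JointData[]>();

        public void Process(IList<BodyFrameMessage> bodies, double deltaSeconds)
        {
            foreach (var body in bodies)
            {
                JointData[] last;
                if (previous.TryGetValue(body.TrackingId, out last) &&
                    last.Length == body.Joints.Length && deltaSeconds > 0)
                {
                    double peakSpeed = 0;
                    for (int i = 0; i < body.Joints.Length; i++)
                    {
                        double dx = body.Joints[i].X - last[i].X;
                        double dy = body.Joints[i].Y - last[i].Y;
                        double dz = body.Joints[i].Z - last[i].Z;
                        peakSpeed = Math.Max(peakSpeed,
                            Math.Sqrt(dx * dx + dy * dy + dz * dz) / deltaSeconds);
                    }
                    // A real processor would tag the outgoing message with this,
                    // e.g. to tell deliberate movement from tracking noise.
                }
                previous[body.TrackingId] = body.Joints;
            }
        }
    }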

  18. @ArgesRic
    GESTURE RECOGNITION
    • Discrete and continuous gestures
    • Both generations had gesture controls
    • 1st gen. used heuristics
    • 2nd gen. uses Microsoft’s gesture recognizer
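
    As an example of the 1st gen. heuristic style (not the installation's
    actual rules): a discrete "hand raised above the head" gesture with a short
    hold time so tracking noise can't trigger it. Thresholds are illustrative.

    // Joint heights are in metres, Y up.
    public class HandRaisedDetector
    {
        const float HoldSeconds = 0.4f;   // debounce against jitter
        double heldFor;

        public bool Update(float handY, float headY, double deltaSeconds)
        {
            if (handY > headY + 0.10f)     // hand clearly above the head
                heldFor += deltaSeconds;
            else
                heldFor = 0;

            return heldFor >= HoldSeconds;
        }
    }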

  19. @ArgesRic
    A BRIEF OVERVIEW OF THE
    INSTALLATIONS

  20. @ArgesRic
    FIRST GENERATION
    • One passer-by per Kinect, represented by an agent cloud
    • Users play with data (simple interaction)
    • Avatar made out of particles plucked from the stream
    • Avatar eventually dissolves and goes back to the stream
    • Pure avateering
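
    A rough, purely illustrative sketch of the "plucked from the stream" idea:
    each tracked joint claims a handful of nearby free particles, and releasing
    them lets the avatar dissolve back into the stream.

    using System.Collections.Generic;
    using UnityEngine;

    public class AvatarPlucker : MonoBehaviour
    {
        public List<Transform> freeParticles;   // the background stream
        public int particlesPerJoint = 12;

        public List<Transform> ClaimForJoint(Vector3 jointPosition)
        {
            // Take the closest free particles and hand them to the joint.
            freeParticles.Sort((a, b) =>
                (a.position - jointPosition).sqrMagnitude.CompareTo(
                (b.position - jointPosition).sqrMagnitude));

            int count = Mathf.Min(particlesPerJoint, freeParticles.Count);
            List<Transform> claimed = freeParticles.GetRange(0, count);
            freeParticles.RemoveRange(0, count);
            return claimed;
        }

        public void Release(List<Transform> claimed)
        {
            // Dissolving the avatar: give the particles back to the stream.
            freeParticles.AddRange(claimed);
        }
    }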

  21. @ArgesRic
    AVATAR GENERATION

  22. @ArgesRic
    AVATAR GENERATION

  23. @ArgesRic

  24. @ArgesRic
    SECOND GENERATION
    • Six passers-by per Kinect, who can play with the background streams.
    • More game oriented - much more complex interaction.
    • Three mini-games: music jam, football, hang-glider.
    • Mixed avateering, animation, inverse kinematics

  25. @ArgesRic
    SECOND GENERATION
    • Six passers-by per Kinect, who can play with the background streams.
    • More game oriented - much more complex interaction.
    • Three mini-games: music jam, football, hang-glider.
    • Mixed avateering, animation, inverse kinematics
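
    A minimal sketch of the mixing idea using Unity's humanoid IK (the
    installation's actual blending is more involved than this): the avatar
    plays its animation while one hand is pulled towards the tracked hand
    position with an adjustable weight.

    using UnityEngine;

    [RequireComponent(typeof(Animator))]
    public class HandIKFromKinect : MonoBehaviour
    {
        public Transform trackedRightHand;          // fed from the Kinect data, world space
        [Range(0f, 1f)] public float weight = 0.7f; // 0 = pure animation, 1 = pure tracking

        Animator animator;

        void Awake() { animator = GetComponent<Animator>(); }

        // Requires "IK Pass" to be enabled on the Animator layer.
        void OnAnimatorIK(int layerIndex)
        {
            if (trackedRightHand == null) return;
            animator.SetIKPosition(AvatarIKGoal.RightHand, trackedRightHand.position);
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, weight);
        }
    }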

  26. @ArgesRic
    AVATEERING CONSIDERATIONS

  27. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.

  28. @ArgesRic
    CONSIDERATIONS

  29. @ArgesRic

  30. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you
    want.

  31. @ArgesRic
    CONSIDERATIONS

  32. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how
    directly you can apply Kinect data.

  33. @ArgesRic
    LIMB ORIENTATION
    • Assumption: body orientation data is a series of transforms in a
    hierarchy.
    • Reality: it’s only the direction in which the next joint lies.
    • Remember: what Kinect sees is a shadow.
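
    In practice this means rebuilding each bone's rotation from joint
    positions. A sketch of one way to do it in Unity, assuming the bone's rest
    forward axis points along +Z (real rigs vary); twist around the bone's own
    axis stays unknown, which is exactly the ambiguity the slide points at.

    using UnityEngine;

    public static class BoneOrientation
    {
        public static Quaternion FromJoints(Vector3 jointPos, Vector3 nextJointPos,
                                            Quaternion restRotation)
        {
            Vector3 boneDir = (nextJointPos - jointPos).normalized;

            // Swing the bone's rest forward axis onto the observed direction;
            // roll is left to constraints or the model's rest pose.
            Quaternion swing = Quaternion.FromToRotation(restRotation * Vector3.forward, boneDir);
            return swing * restRotation;
        }
    }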

  34. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly
    you can apply Kinect data.
    • You’ll trade avateering freedom for reasonable constraints.
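
    One way that trade-off can look in code - a sketch, not the installation's
    actual implementation: low-pass filter the tracked rotation and cap how far
    it may stray from the designer-authored pose.

    using UnityEngine;

    public static class JointConstraints
    {
        public static Quaternion Constrain(Quaternion current, Quaternion tracked,
                                           Quaternion animationPose,
                                           float smoothing, float maxDegreesFromPose)
        {
            // Low-pass filter the noisy tracked rotation.
            Quaternion smoothed = Quaternion.Slerp(current, tracked, smoothing);

            // Never let the joint wander further than maxDegreesFromPose from
            // the designer-authored pose.
            if (Quaternion.Angle(animationPose, smoothed) > maxDegreesFromPose)
                smoothed = Quaternion.RotateTowards(animationPose, smoothed, maxDegreesFromPose);

            return smoothed;
        }
    }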

  35. @ArgesRic

  36. @ArgesRic

  37. @ArgesRic

  38. @ArgesRic
    (STILL ON FREEDOM VS. AVATEERING)
    • Close to launch, we get a report that the upper body is fine, but legs
    are kicking around like crazy
    • Client wants users to be able to lift their legs and kick, so we can’t
    just lock them down
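
    The deck doesn't spell out the eventual fix; one compromise in this spirit
    (illustrative only) is to keep the legs animated by default and blend in
    tracked data only when it looks deliberate.

    using UnityEngine;

    public static class LegBlending
    {
        public static Quaternion BlendLeg(Quaternion animated, Quaternion tracked,
                                          bool jointTracked, float jointSpeed,
                                          float maxPlausibleSpeed = 3f)
        {
            if (!jointTracked || jointSpeed > maxPlausibleSpeed)
                return animated;               // ignore jittery or inferred joints

            // Trust the tracking less as the joint nears the plausibility limit.
            float trust = 1f - Mathf.Clamp01(jointSpeed / maxPlausibleSpeed);
            return Quaternion.Slerp(animated, tracked, trust);
        }
    }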

  39. @ArgesRic

  40. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly you can apply
    Kinect data.
    • You’ll trade avateering freedom for reasonable constraints.
    • The more like a person your avatar looks, the closer players will expect to be
    mimicked.

  41. @ArgesRic
    CONSIDERATIONS
    • Visual design affects what you get to do with a model.
    • Angular, faceted characters are the last sort of avatar that you want.
    • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect
    data.
    • You’ll trade avateering freedom for reasonable constraints.
    • The more like a person your avatar looks, the closer players will expect to be mimicked.
    • The more like a person your avatar looks, the closer players will mimic it.

  42. @ArgesRic

  43. @ArgesRic
    CONCLUSIONS

  44. @ArgesRic
    CONCLUSIONS
    • If you need to do avateering, try to let designers do the tweaking by
    providing them with tools. Avatar design may interfere with this.
    • Gesture recognition works great.
    • Combining animation and avateering is a great way to convey an
    impression to your users.
    • You’ll need to do a lot of data massaging. Make sure you schedule for it.

  45. @ArgesRic
    QUESTIONS?

  46. @ArgesRic
    THANKS!
    https://numergent.com/talks/
