Multi-Kinect Motion Tracking and Interactive Installations


Delivered as a masterclass at the Nucl.AI 2015 conference. I focused on ways to deal with erratic and noisy input signals, on coordinating multiple Kinect sensors via RabbitMQ, and on offloading heavy processing to multiple computers.

The video is currently available in the Nucl.AI archives for conference attendees and AIGameDev subscribers.


Ricardo J. Méndez

July 20, 2015

Transcript

  1. 2.

    @ArgesRic WHO AM I? • Ricardo J. Méndez • Founder,

    Numergent • E-mail: ricardo@numergent.com • Twitter: @ArgesRic
  2. 3.

    @ArgesRic WHAT DO I DO? • Work with media agencies

    • Data visualization and interactive installations • Previously: • Data analysis for banking, healthcare • Game development
  3. 4.

    @ArgesRic TALK STRUCTURE • Show a bit about the installations

    • Give a high level overview • Talk about using multiple networked Kinects • Get into nitty gritty of avateering gotchas, technical and design
  4. 6.

    @ArgesRic INTERACTIVE INSTALLATIONS? • Gigantic, in-place, Kinect-powered mini-games • 8m × 3m

    screens made out of MicroTiles • Promotional in nature, advertising oriented • Look cool, easy to understand, grab the user’s attention • Months of work for something players will experience in 30-60 seconds.
  5. 9.

    @ArgesRic HIGH LEVEL OVERVIEW • 3D application done in Unity

    4 • Multiple Kinects, three on the 1st gen., two on the 2nd gen. • Distributed team - New York, Hamburg, Berlin, Bucharest • 4,000 autonomous agents
  6. 10.

    @ArgesRic WHY SO MANY AGENTS? • Working with vague concepts

    like “the power of the network”. • Meant to represent data. • Flashes through the streams to represent communication. • Each agent acts independently to provide more visual variety.
  7. 11.

    @ArgesRic UNITYSTEER • MIT-Licensed • https://github.com/ricardojmendez/UnitySteer • Started as a

    port of Craig Reynolds’ OpenSteer • Each background particle is a UnitySteer agent making its own decisions and signaling to neighbors
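The behaviour each agent runs can be pictured with a minimal seek steering force, in the style of OpenSteer's steering behaviours. This is an illustrative Python sketch (UnitySteer itself is a C# library; the `Agent` type and `seek` signature here are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    pos: list  # [x, y] position
    vel: list  # [x, y] current velocity

def _clamp(v, limit):
    """Scale a 2D vector down to `limit` magnitude if it exceeds it."""
    mag = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return [c * limit / mag for c in v] if mag > limit else v

def seek(agent, target, max_force=0.5):
    """Steering force that turns the agent's velocity toward `target`."""
    desired = [target[0] - agent.pos[0], target[1] - agent.pos[1]]
    mag = (desired[0] ** 2 + desired[1] ** 2) ** 0.5 or 1.0
    desired = [d / mag for d in desired]  # unit direction to the target
    force = [desired[0] - agent.vel[0], desired[1] - agent.vel[1]]
    return _clamp(force, max_force)
```

Each frame, every agent sums forces like this (seek, separation, cohesion) and integrates the result into its velocity, which is what lets thousands of particles decide independently.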
  8. 14.

    @ArgesRic TECHNICAL LIMITATIONS • First gen. installation called for 3

    Kinects • … but the Kinect 1 SDK only supported two per machine. • Second gen. installation called for 2 Kinects • … but the Kinect 2 SDK supports only one per machine.
  9. 15.

    @ArgesRic SOLUTION: KINECT REMOTE • Kinect Remote: https://github.com/ricardojmendez/Kinect2Remote • MIT-licensed

    .Net Application and client library • Processes depth and body information, sends it in protobuf format to a RabbitMQ server
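The wire format can be pictured as one compact packet per body per frame. The layout below is a hypothetical stand-in for the actual protobuf schema (the field order, joint count, and types are assumptions for illustration):

```python
import struct
import time

JOINT_COUNT = 25  # Kinect 2 tracks 25 joints per body

def pack_body_frame(body_id, joints):
    """Flatten one tracked body into a compact binary frame:
    a little-endian timestamp (double) and body id (uint32),
    followed by x/y/z floats for each joint."""
    payload = struct.pack("<dI", time.time(), body_id)
    for x, y, z in joints:
        payload += struct.pack("<fff", x, y, z)
    return payload
```

A packet like this would then be published to the RabbitMQ queue for its data type, so consumers subscribe only to the streams they need.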
  10. 16.

    @ArgesRic KINECT REMOTE • Single package per data type per

    frame. • Multiple queues, one per data type. • Expires at 35ms - we can interpolate. • Allows for “Body Processors”, which see (and can tag) all data before it gets stuffed into a “Body Bag”.
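The 35ms expiry plus interpolation scheme can be sketched as follows: when the newest sample has expired, extrapolate along the last known motion rather than freezing. This is an illustrative reconstruction, not the Kinect2Remote code:

```python
EXPIRY_S = 0.035  # packets older than 35 ms are considered stale

class JointStream:
    """Keeps the last two samples of a joint and fills in a value
    when the freshest packet has expired."""

    def __init__(self):
        self.prev = None  # (timestamp, (x, y, z))
        self.curr = None

    def push(self, t, pos):
        self.prev, self.curr = self.curr, (t, pos)

    def sample(self, now):
        if self.curr is None:
            return None
        t1, p1 = self.curr
        if now - t1 <= EXPIRY_S or self.prev is None:
            return p1  # still fresh, use it directly
        t0, p0 = self.prev
        # Stale: extrapolate along the last motion, clamped to one frame.
        dt = t1 - t0 or 1e-6
        a = min((now - t1) / dt, 1.0)
        return tuple(c1 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
```
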
  11. 17.

    @ArgesRic WHAT DO THE BODY PROCESSORS DO? • Passer-by detection

    • User selection • Deciding whether to hold the user • Limb ambiguity • Joint velocity calculation • Gesture recognition
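As one example of a body processor, user selection might tag the body nearest the sensor as the active one. The `spine_z` depth field and the `active` tag are assumptions for illustration:

```python
def select_active_user(bodies):
    """Tag the body closest to the sensor (smallest spine depth, in
    metres) as the active user; everyone else stays a passer-by."""
    if not bodies:
        return bodies
    closest = min(bodies, key=lambda b: b["spine_z"])
    for b in bodies:
        b["active"] = b is closest
    return bodies
```

Because processors see every body before it is stuffed into a "Body Bag", downstream consumers can simply read the tag instead of re-deriving who the player is.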
  12. 18.

    @ArgesRic GESTURE RECOGNITION • Discrete and continuous gestures • Both

    generations had gesture controls • 1st gen. used heuristics • 2nd gen. uses Microsoft’s gesture recognizer
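A discrete-gesture heuristic of the kind the 1st gen. relied on can be as simple as a predicate over joint positions. This hands-above-head check is a hypothetical example, not one of the shipped gestures:

```python
def hands_above_head(joints):
    """Fire a discrete gesture when both hands rise above the head.
    `joints` maps joint name -> (x, y, z), with y pointing up."""
    head_y = joints["head"][1]
    return (joints["hand_left"][1] > head_y
            and joints["hand_right"][1] > head_y)
```

Heuristics like this are cheap and debuggable, which is why they were workable before Microsoft's trained gesture recognizer arrived with the second generation.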
  13. 20.

    @ArgesRic FIRST GENERATION • One passer-by per Kinect, represented by

    an agent cloud • Users play with data (simple interaction) • Avatar made out of particles plucked from the stream • Avatar eventually dissolves and goes back to the stream • Pure avateering
  14. 23.
  15. 24.

    @ArgesRic SECOND GENERATION • Six passers-by per Kinect, can play

    with the background streams. • More game oriented - much more complex interaction. • Three mini-games: music jam, football, hang-glider. • Mixed avateering, animation, inverse kinematics
  17. 29.
  18. 30.

    @ArgesRic CONSIDERATIONS • Visual design affects what you get to

    do with a model. • Angular, faceted characters are the last sort of avatar that you want.
  19. 32.

    @ArgesRic CONSIDERATIONS • Visual design affects what you get to

    do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data.
  20. 33.

    @ArgesRic LIMB ORIENTATION • Assumption: body orientation data is a

    series of transforms in a hierarchy. • Reality: it’s only the direction that the next joint is at. • Remember: what Kinect sees is a shadow.
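In other words, a bone's rotation has to be reconstructed from the direction toward the next joint, and the roll around the bone axis is simply not in the data. A sketch of that reconstruction (yaw/pitch from positions, y up; the axis convention is an assumption):

```python
import math

def direction_to_angles(parent, child):
    """Turn the parent->child joint direction into yaw and pitch in
    degrees. Roll about the bone axis cannot be recovered from joint
    positions alone - exactly the ambiguity the slide warns about."""
    dx, dy, dz = (c - p for c, p in zip(child, parent))
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return yaw, pitch
```

Any avateering rig therefore has to invent the missing roll, usually from constraints or from the character's bind pose.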
  21. 34.

    @ArgesRic CONSIDERATIONS • Visual design affects what you get to

    do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints.
  22. 35.
  23. 36.
  24. 37.
  25. 38.

    @ArgesRic (STILL ON FREEDOM VS. AVATEERING) • Close to launch,

    we get a report that the upper body is fine, but legs are kicking around like crazy • Client wants users to be able to lift their legs and kick, so we can’t just lock them down
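One way to calm the legs without locking them is to weight each new sample by the sensor's tracking confidence, so jitter is damped while a deliberate kick still reads. A sketch of one possible approach, not the installation's actual fix:

```python
def smooth_joint(prev, new, confidence, base_alpha=0.6):
    """Exponential smoothing whose blend factor shrinks with tracking
    confidence (0..1): noisy, low-confidence samples barely move the
    joint, while confident ones are followed closely."""
    alpha = base_alpha * confidence
    return tuple(p + alpha * (n - p) for p, n in zip(prev, new))
```
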
  26. 39.
  27. 40.

    @ArgesRic CONSIDERATIONS • Visual design affects what you get to

    do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints. • The more like a person your avatar looks, the closer players will expect to be mimicked.
  28. 41.

    @ArgesRic CONSIDERATIONS • Visual design affects what you get to

    do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints. • The more like a person your avatar looks, the closer players will expect to be mimicked. • The more like a person your avatar looks, the closer players will mimic it.
  29. 42.
  30. 44.

    @ArgesRic CONCLUSIONS • If you need to do avateering, try

    to let designers do the tweaking by providing them with tools. Avatar design may interfere with this. • Gesture recognition works great. • Combining animation and avateering is a great way to convey an impression to your users. • You’ll need to do a lot of data massaging. Make sure you schedule for it.