Multi-Kinect motion tracking and Interactive Installations

Delivered as a masterclass at the Nucl.AI 2015 conference. I focused on ways to deal with erratic and noisy input signals, coordinate multiple Kinect sensors via RabbitMQ, and offload heavy processing to multiple computers.

The video is currently available in the Nucl.AI archives for conference attendees and AIGameDev subscribers.

Ricardo J. Méndez

July 20, 2015

Transcript

  1. @ArgesRic WHAT DO I DO? • Work with media agencies • Data visualization and interactive installations • Previously: • Data analysis for banking, healthcare • Game development
  2. @ArgesRic TALK STRUCTURE • Show a bit about the installations • Give a high level overview • Talk about using multiple networked Kinects • Get into the nitty-gritty of avateering gotchas, technical and design
  3. @ArgesRic INTERACTIVE INSTALLATIONS? • Gigantic, in-place, Kinect-powered mini-games • 8m x 3m screens made out of MicroTiles • Promotional in nature, advertising oriented • Look cool, easy to understand, grab the user’s attention • Months of work for something players will experience in 30-60 seconds.
  4. @ArgesRic HIGH LEVEL OVERVIEW • 3D application done in Unity 4 • Multiple Kinects, 3 on the 1st gen, 2 on the 2nd gen • Distributed team - New York, Hamburg, Berlin, Bucharest • 4,000 autonomous agents
  5. @ArgesRic WHY SO MANY AGENTS? • Working with vague concepts like “the power of the network”. • Meant to represent data. • Flashes through the streams to represent communication. • Each agent acts independently to provide more visual variety.
  6. @ArgesRic UNITYSTEER • MIT-licensed • https://github.com/ricardojmendez/UnitySteer • Started as a port of Craig Reynolds’ OpenSteer • Each background particle is a UnitySteer agent making its own decisions and signaling to neighbors
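For illustration, here is a minimal seek-style steering agent in the spirit of OpenSteer/UnitySteer. This is a sketch, not the actual UnitySteer API (which layers behaviour components onto vehicles); the class and field names are hypothetical, and neighbor signaling is omitted.

```csharp
// Illustrative only: a bare-bones "seek" steering agent in the OpenSteer style.
using UnityEngine;

public class StreamParticleAgent : MonoBehaviour
{
    public Vector3 target;        // point in the stream the agent drifts towards
    public float maxSpeed = 2f;   // units per second
    public float maxForce = 4f;   // clamps how sharply the agent can turn

    Vector3 velocity;

    void Update()
    {
        // Classic seek: desired velocity points at the target at max speed;
        // the steering force is the difference to the current velocity.
        Vector3 desired = (target - transform.position).normalized * maxSpeed;
        Vector3 steering = Vector3.ClampMagnitude(desired - velocity, maxForce);

        velocity = Vector3.ClampMagnitude(velocity + steering * Time.deltaTime, maxSpeed);
        transform.position += velocity * Time.deltaTime;
    }
}
```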
  7. @ArgesRic TECHNICAL LIMITATIONS • First gen. installation called for 3 Kinects • … but the Kinect 1 SDK only supported two per machine. • Second gen. installation called for 2 Kinects • … but the Kinect 2 SDK supports only one per machine.
  8. @ArgesRic SOLUTION: KINECT REMOTE • Kinect Remote: https://github.com/ricardojmendez/Kinect2Remote • MIT-licensed .NET application and client library • Processes depth and body information, sends it in protobuf format to a RabbitMQ server
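A rough sketch of the serialization half of that pipeline, assuming protobuf-net on the .NET side. `BodyFrameMessage` is a hypothetical message type for illustration, not the actual Kinect2Remote wire format (see the repository for that).

```csharp
// Sketch: serialize a body frame into a compact protobuf payload for the wire.
using System.IO;
using ProtoBuf;

[ProtoContract]
public class BodyFrameMessage
{
    [ProtoMember(1)] public long Timestamp { get; set; }
    [ProtoMember(2)] public ulong TrackingId { get; set; }
    [ProtoMember(3)] public float[] JointPositions { get; set; }   // flattened x,y,z per joint
}

public static class FrameSerializer
{
    public static byte[] ToBytes(BodyFrameMessage frame)
    {
        using (var stream = new MemoryStream())
        {
            Serializer.Serialize(stream, frame);   // protobuf-net binary serialization
            return stream.ToArray();
        }
    }
}
```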
  9. @ArgesRic KINECT REMOTE • Single package per data type per frame. • Multiple queues, one per data type. • Expires at 35ms - we can interpolate. • Allows for “Body Processors”, which see (and can tag) all data before it gets stuffed into a “Body Bag”.
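A minimal sketch of that queue layout using the standard RabbitMQ .NET client: one queue per data type, and a per-message expiration of 35 ms so a consumer never picks up a stale frame and can interpolate instead. The queue names and helper class are hypothetical, not the Kinect2Remote code itself.

```csharp
// Sketch: per-data-type queues with a 35 ms message TTL.
using RabbitMQ.Client;

public static class RemoteQueues
{
    public static void Declare(IModel channel)
    {
        // One queue per data type, so consumers only subscribe to what they need.
        channel.QueueDeclare("kinect.bodies", durable: false, exclusive: false,
                             autoDelete: false, arguments: null);
        channel.QueueDeclare("kinect.depth",  durable: false, exclusive: false,
                             autoDelete: false, arguments: null);
    }

    public static void Send(IModel channel, string queue, byte[] payload)
    {
        var props = channel.CreateBasicProperties();
        props.Expiration = "35";   // milliseconds; expired frames are dropped, not queued up

        // Publish through the default exchange straight to the per-type queue.
        channel.BasicPublish("", queue, props, payload);
    }
}
```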
  10. @ArgesRic WHAT DO THE BODY PROCESSORS DO? • Passer-by detection • User selection • Deciding whether to hold the user • Limb ambiguity • Joint velocity calculation • Runs gesture recognition
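As an example of one processor responsibility, here is a hedged sketch of joint velocity calculation: the positional delta between consecutive frames divided by the elapsed time. The types are illustrative stand-ins, not the Kinect2Remote processor interface.

```csharp
// Sketch: derive per-joint velocity so downstream consumers (e.g. gesture
// heuristics) can use it without re-deriving it from raw positions.
using System.Collections.Generic;
using System.Numerics;

public class JointVelocityProcessor
{
    readonly Dictionary<int, Vector3> lastPositions = new Dictionary<int, Vector3>();

    // Returns the joint's velocity in units/second based on the previous frame.
    public Vector3 Process(int jointId, Vector3 position, float deltaTime)
    {
        Vector3 velocity = Vector3.Zero;
        if (lastPositions.TryGetValue(jointId, out var previous) && deltaTime > 0f)
        {
            velocity = (position - previous) / deltaTime;
        }
        lastPositions[jointId] = position;
        return velocity;
    }
}
```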
  11. @ArgesRic GESTURE RECOGNITION • Discrete and continuous gestures • Both generations had gesture controls • 1st gen. used heuristics • 2nd gen. uses Microsoft’s gesture recognizer
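A toy example of the heuristic approach used in the first generation, with hypothetical thresholds rather than the installation's actual rules: a hand-raise check from relative joint positions, plus a velocity gate so brief tracking glitches don't trigger it.

```csharp
// Sketch: a position-based "hand raised" heuristic with a noise gate.
using System.Numerics;

public static class GestureHeuristics
{
    // True when the hand sits clearly above the head and isn't moving wildly,
    // which usually indicates a deliberate raise rather than sensor noise.
    public static bool IsHandRaised(Vector3 hand, Vector3 head, Vector3 handVelocity,
                                    float heightMargin = 0.10f, float maxSpeed = 1.5f)
    {
        bool aboveHead = hand.Y > head.Y + heightMargin;
        bool steady = handVelocity.Length() < maxSpeed;
        return aboveHead && steady;
    }
}
```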
  12. @ArgesRic FIRST GENERATION • One passer-by per Kinect, represented by an agent cloud • Users play with data (simple interaction) • Avatar made out of particles plucked from the stream • Avatar eventually dissolves and goes back to the stream • Pure avateering
  13. @ArgesRic SECOND GENERATION • Six passers-by per Kinect, can play with the background streams. • More game oriented - much more complex interaction. • Three mini-games: music jam, football, hang-glider. • Mixed avateering, animation, inverse kinematics
  14. @ArgesRic SECOND GENERATION • Six passers-by per Kinect, can play with the background streams. • More game oriented - much more complex interaction. • Three mini-games: music jam, football, hang-glider. • Mixed avateering, animation, inverse kinematics
  15. @ArgesRic CONSIDERATIONS • Visual design affects what you get to do with a model. • Angular, faceted characters are the last sort of avatar that you want.
  16. @ArgesRic CONSIDERATIONS • Visual design affects what you get to do with a model. • Abstract, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data.
  17. @ArgesRic LIMB ORIENTATION • Assumption: body orientation data is a series of transforms in a hierarchy. • Reality: it’s only the direction towards the next joint. • Remember: what Kinect sees is a shadow.
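A sketch of how a bone rotation can be reconstructed in Unity from that limited data (the helper is hypothetical): since all you really have is the direction from one joint to the next, the roll around the bone axis has to come from the rig's rest pose or from IK.

```csharp
// Sketch: rebuild a bone rotation from two joint positions only.
using UnityEngine;

public static class BoneOrientation
{
    // Rotates the bone so its forward axis points from the parent joint to the
    // child joint. restPoseForward is the bone's forward direction in the bind
    // pose; roll around the bone axis is not recoverable from positions alone.
    public static Quaternion FromJoints(Vector3 parentJoint, Vector3 childJoint,
                                        Quaternion restPose, Vector3 restPoseForward)
    {
        Vector3 boneDirection = (childJoint - parentJoint).normalized;
        return Quaternion.FromToRotation(restPoseForward, boneDirection) * restPose;
    }
}
```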
  18. @ArgesRic CONSIDERATIONS • Visual design affects what you get to do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints.
  19. @ArgesRic (STILL ON FREEDOM VS. AVATEERING) • Close to launch, we get a report that the upper body is fine, but legs are kicking around like crazy • Client wants users to be able to lift their legs and kick, so we can’t just lock them down
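One hedged option for this kind of data massaging, as an illustration rather than what the installation actually shipped: per-joint exponential smoothing, tuned more aggressively on the legs than on the upper body so that deliberate kicks still read while jitter is suppressed.

```csharp
// Sketch: tunable per-joint low-pass filter that damps jitter without locking
// the joint down entirely.
using UnityEngine;

public static class JointSmoothing
{
    // smoothing = 0 passes raw data through; values near 1 suppress jitter but
    // add latency, so legs might use ~0.8 while arms stay closer to ~0.3.
    public static Vector3 Smooth(Vector3 previousFiltered, Vector3 raw, float smoothing)
    {
        return Vector3.Lerp(raw, previousFiltered, Mathf.Clamp01(smoothing));
    }
}
```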
  20. @ArgesRic CONSIDERATIONS • Visual design affects what you get to do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints. • The more like a person your avatar looks, the closer players will expect to be mimicked.
  21. @ArgesRic CONSIDERATIONS • Visual design affects what you get to do with a model. • Angular, faceted characters are the last sort of avatar that you want. • You’ll need to deal with expectations, in particular, with how directly you can apply Kinect data. • You’ll trade avateering freedom for reasonable constraints. • The more like a person your avatar looks, the closer players will expect to be mimicked. • The more like a person your avatar looks, the closer players will mimic it.
  22. @ArgesRic CONCLUSIONS • If you need to do avateering, try to let designers do the tweaking by providing them with tools. Avatar design may interfere with this. • Gesture recognition works great. • Combining animation and avateering is a great way to convey an impression to your users. • You’ll need to do a lot of data massaging. Make sure you schedule for it.