
Responsive Web Applications

If you'd like to see the accompanying talk, head over to http://video.kiberpipa.org/SU_Klemen_Slavic-Responsive_web_applications/ .

Klemen Slavič

December 14, 2011



  1. Need reasons? Fine:
     • A finger obscures the content
     • A touch isn't as precise as a pointer
     • Touches wiggle, mouse pointers don't (easy to mistake for a drag)
     • A mouse has a single pointer, but you can have multiple touches
     • A pointer is persistent, a touch is not
  2. Le Gránde Fail:
     • Menus with hover-triggered content
     • Gallery widgets that pause on hover
     • Using mouse drag events to navigate and interact
     • Using double clicks
     • Using right clicks
     • ...
     Need we go on?
  3. It avoids convenience traps by defining new events that are triggered by different primitive interactions.
  4. That's why it's so important that we're able to express actions as a series of interface events, regardless of the interaction.
  5. Just to name a few:
     • Ultra- and infrasound projectors for remote haptic feedback
     • Immersive holography
     • Natural language interfaces
     • Non-invasive neural interfaces
     • THE !
  6. Let's define a touch event – tap:
     • User touches a point on the screen
     • User lifts the finger after no more than 300 ms and doesn't move it by more than 10 px
     • If the above conditions hold, we trigger a tap event and forward it the coordinates and target element
     • Otherwise, we do nothing
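The tap rules above can be sketched in a few lines of JavaScript. The 300 ms and 10 px thresholds come from the slide; the `isTap` helper, the event wiring and the `tap` event name are illustrative (this uses the modern `CustomEvent` constructor — in 2011 you'd reach for `document.createEvent`):

```javascript
// Classify a finished touch: it's a tap when it ended within 300 ms
// and moved no more than 10 px from where it started.
function isTap(start, end) {
  const dt = end.time - start.time;
  const dist = Math.hypot(end.x - start.x, end.y - start.y);
  return dt <= 300 && dist <= 10;
}

// Browser wiring (skipped when there's no DOM):
if (typeof document !== 'undefined') {
  let start = null;
  document.addEventListener('touchstart', (e) => {
    const t = e.touches[0];
    start = { x: t.clientX, y: t.clientY, time: e.timeStamp };
  });
  document.addEventListener('touchend', (e) => {
    const t = e.changedTouches[0];
    const end = { x: t.clientX, y: t.clientY, time: e.timeStamp };
    if (start && isTap(start, end)) {
      // Forward the coordinates; the event bubbles from the target element.
      e.target.dispatchEvent(new CustomEvent('tap', {
        bubbles: true,
        detail: { x: end.x, y: end.y },
      }));
    }
    start = null;
  });
}
```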
  7. A swipe is similar:
     • User touches a point on the screen
     • User moves the finger by more than 10 px in any direction and lifts it
     • Determine the direction (N, S, E, W), trigger a swipe event and forward it the direction, initial coordinates and length of the swipe
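The direction step can be reduced to a small pure function. The 10 px threshold is the slide's; the function name and four-way classification scheme (dominant axis wins) are an illustrative choice:

```javascript
// Derive the swipe direction from start/end coordinates.
// Returns 'N', 'S', 'E' or 'W', or null when the move stays under 10 px.
// Screen coordinates grow downward, so negative dy means "north".
function swipeDirection(start, end) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.hypot(dx, dy) <= 10) return null;
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? 'E' : 'W';
  return dy > 0 ? 'S' : 'N';
}
```

A swipe handler would call this from `touchend` and bundle the direction, the initial coordinates and `Math.hypot(dx, dy)` into the event's detail.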
  8. Working with spatial data is trickier:
     • Determine which person is interacting
     • Place a plane parallel to the user's abdomen at about 2/3 arm's length in front of them
     • Determine the intersection point(s) of the skeleton with the plane to find "touch" points
     • Trigger intersection events for each intersection on each animation frame
     • ...[more steps here]...
     • Profit
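The geometric core of the intersection step is standard segment–plane math. A minimal sketch, assuming a skeleton bone given as two joint positions and a plane given by a point and a normal (all names here are illustrative, not any tracking SDK's API):

```javascript
// Intersect a skeleton bone (segment a→b) with the interaction plane.
// Returns the intersection point, or null when the bone doesn't cross it.
function intersectSegmentPlane(a, b, planePoint, normal) {
  const dot = (u, v) => u.x * v.x + u.y * v.y + u.z * v.z;
  const sub = (u, v) => ({ x: u.x - v.x, y: u.y - v.y, z: u.z - v.z });
  const dir = sub(b, a);
  const denom = dot(normal, dir);
  if (denom === 0) return null;              // bone parallel to the plane
  const t = dot(normal, sub(planePoint, a)) / denom;
  if (t < 0 || t > 1) return null;           // crossing lies outside the bone
  return { x: a.x + t * dir.x, y: a.y + t * dir.y, z: a.z + t * dir.z };
}
```

Running this over every bone on each animation frame yields the "touch" points the slide describes; each non-null result would then be dispatched as an intersection event.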
  9. Voice interaction is simpler:
     • Use the browser's Speech Input API to determine a command
     • Use a prescribed grammar to match words to commands
     • Trigger events based on the chosen command on the active element
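A sketch of the word-to-command matching, with browser wiring via the later (vendor-prefixed) Web Speech API rather than 2011's `x-webkit-speech` input attribute. The command table, trigger words and `voicecommand` event name are all illustrative stand-ins for the slide's "prescribed grammar":

```javascript
// Illustrative grammar: each command lists the words that trigger it.
const COMMANDS = {
  open:  ['open', 'show'],
  close: ['close', 'hide'],
  next:  ['next', 'forward'],
};

// Map a recognized transcript onto a command, or null when nothing matches.
function matchCommand(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  for (const [command, triggers] of Object.entries(COMMANDS)) {
    if (words.some((w) => triggers.includes(w))) return command;
  }
  return null;
}

// Browser wiring (skipped when speech recognition isn't available):
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.onresult = (e) => {
    const command = matchCommand(e.results[0][0].transcript);
    if (command) {
      // Trigger the event on the active element, as the slide suggests.
      document.activeElement.dispatchEvent(
        new CustomEvent('voicecommand', { bubbles: true, detail: { command } })
      );
    }
  };
  recognition.start();
}
```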
  10. We can set up any number of these events based on primitive events. Even gestures.
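As one example of composing a gesture from such primitives, here is a sketch of a double-tap detector layered on top of tap timestamps. The 300 ms window and the helper name are assumptions, not from the slides:

```javascript
// Build a stateful detector: feed it tap timestamps (ms), and it reports
// true when a tap follows the previous one within `interval` ms.
function makeDoubleTapDetector(interval = 300) {
  let last = -Infinity;
  return function onTap(time) {
    const isDouble = time - last <= interval;
    last = isDouble ? -Infinity : time; // reset so a triple tap doesn't chain
    return isDouble;
  };
}
```

Wired to the `tap` event from slide 6, each `true` result would dispatch a `doubletap` event the same way, and the same layering works for swipe sequences or any other gesture.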