Responsive Web Applications

If you'd like to see the accompanying talk, head over to .


Klemen Slavič

December 14, 2011


  1. Responsive Web Applications Klemen Slavič

  2. Basically ...

  3. Information.

  4. We create it.

  5. We consume it.

  6. Across various media.

  7. It's interactive.

  8. It's haptic.

  9. It's responsive.

  10. It wasn't always like this, though.

  11. Traditionally, it was always bound to a medium.

  12. But along came computers.

  13. Information became coded, abstract.

  14. It became intangible.

  15. Reproduction was cheap, but lacking.

  16. Computers are catching up, though.

  17. But interaction was indirect, clunky.

  18. It still is.

  19. What makes it so inaccessible?

  20. We use all five senses to experience the world

    around us.
  21. Devices can cater to sight and sound.

  22. But there's a more important sense for interaction – touch.

  23. Obviously, we're ignoring taste and smell. (But frankly, we're

    glad those aren't part of the experience.)
  24. Touch is direct.

  25. Interaction becomes natural.

  26. Not everything supports it, though.

  27. But why should we be creating baseline experiences for the lowest common denominator?

  28. Use whatever is available at the time.

  29. Adapt. Evolve.

  30. ... but beware of convenience traps.

  31. A touch does not a pointer make.

  32. Even though a tap triggers a click, it doesn't mean

    we should treat it as such.
  33. Need reasons? Fine:
    • A finger obscures the content
    • A touch isn't as precise as a pointer
    • Touches wiggle, mouse pointers don't (easy to mistake for a drag)
    • A mouse has a single pointer, but you can have multiple touches
    • A pointer is persistent, a touch is not
  34. There are numerous pitfalls associated with assuming a mouse+keyboard paradigm.

  35. Le Gránde Fail:
    • Menus with hover-triggered content
    • Gallery widgets that pause on hover
    • Using mouse drag events to navigate, interact
    • Using double clicks
    • Using right clicks
    • ... Need we go on?
  36. Hands up: has this ever happened to you?

  37. We need to think differently.

  38. Too soon?

  39. ... aaaaanyways, moving on.

  40. We need to abstract interaction.
    Select item → left click, tap, hand/plane intersection, voice command
  41. Abstraction helps our vocabulary.

  42. It avoids convenience traps by defining new events that are

    triggered by different primitive interactions.
  43. We no longer think in terms of "clicks"

  44. ... but in terms of actions within our environment.
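The talk doesn't show code for this, but the idea can be sketched as a tiny event hub where several primitive inputs all funnel into one logical action (all names here are illustrative, not from the talk):

```javascript
// Minimal action-event hub: primitives like "leftclick", "tap" or a
// voice command all trigger the same abstract "selectitem" action.
function createActionHub() {
  const listeners = {};
  return {
    on(action, fn) {
      (listeners[action] = listeners[action] || []).push(fn);
    },
    trigger(action, detail) {
      (listeners[action] || []).forEach((fn) => fn(detail));
    },
  };
}

const hub = createActionHub();
hub.on("selectitem", (detail) => {
  console.log("selected via", detail.source);
});

// Different primitives, one action:
hub.trigger("selectitem", { source: "leftclick", x: 10, y: 20 });
hub.trigger("selectitem", { source: "tap", x: 10, y: 20 });
hub.trigger("selectitem", { source: "voice", command: "select" });
```

Application code listens only for `selectitem`; adding a new input device later means adding one more `trigger` call site, not touching every handler.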

  45. But why all the fuss?

  46. Two words:

  47. Touch is not the only emerging interface.

  48. Kinect shows a lot of promise.

  49. It provides a way to physically place yourself within the

    interface, indirectly.
  50. Not near it (mouse+keyboard), not on it (touch), but within it.

  51. Oh, and you can talk to it. (There's a Web

    Audio API on the way.)
  52. Seriously.

  53. Think about it.

  54. That's why it's so important that we're able to express

    actions as a series of interface events, regardless of the interaction.
  55. After that, we‘re free to add support for any future

    interaction model.
  56. Just to name a few:
    • Ultra- and infrasound projectors for remote haptic feedback
    • Immersive holography
    • Natural language interfaces
    • Non-invasive neural interfaces
    • THE !
  57. Okay.

  58. So we're not there just yet.

  59. We're still missing browser support for most of these.

  60. Let's make use of the features that are available.

  61. Let's define a touch event – tap:
    • User touches a point on the screen
    • User lifts the finger after no more than 300ms and doesn't move it by more than 10px
    • If the above condition holds: we trigger a tap event and forward it the coordinates and target element
    • Else: we do nothing
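A minimal sketch of that tap test, using the 300ms/10px thresholds from the slide (the commented-out DOM wiring is one possible way to hook it up, not the only one):

```javascript
// Classify a completed touch as a tap: the finger was lifted within
// 300ms and moved no more than 10px on either axis.
function isTap(durationMs, dx, dy) {
  return durationMs <= 300 && Math.abs(dx) <= 10 && Math.abs(dy) <= 10;
}

// Possible DOM wiring (browser only):
// let start;
// el.addEventListener("touchstart", (e) => {
//   const t = e.changedTouches[0];
//   start = { x: t.clientX, y: t.clientY, time: Date.now() };
// });
// el.addEventListener("touchend", (e) => {
//   const t = e.changedTouches[0];
//   if (isTap(Date.now() - start.time, t.clientX - start.x, t.clientY - start.y)) {
//     el.dispatchEvent(new CustomEvent("tap", { detail: { x: t.clientX, y: t.clientY } }));
//   }
// });

console.log(isTap(150, 4, -3)); // quick, steady touch → a tap
console.log(isTap(400, 0, 0));  // held too long → not a tap
```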
  62. A swipe is similar:
    • User touches a point on the screen
    • User moves the finger by more than 10px in any direction and lifts the finger
    • Determine the direction (N, S, E, W), trigger a swipe event and forward it the direction, initial coordinates and length of the swipe
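The direction step can be sketched as a pure function over the touch displacement (screen coordinates, so y grows downward; the 10px threshold is from the slide):

```javascript
// Reduce a touch displacement (dx, dy) to a compass direction and a
// length, per the swipe definition above. Returns null below 10px.
function classifySwipe(dx, dy) {
  const length = Math.sqrt(dx * dx + dy * dy);
  if (length <= 10) return null; // too short to count as a swipe
  let direction;
  if (Math.abs(dx) > Math.abs(dy)) {
    direction = dx > 0 ? "E" : "W";
  } else {
    direction = dy > 0 ? "S" : "N"; // screen y grows downward
  }
  return { direction, length };
}

console.log(classifySwipe(120, 5));  // eastward swipe
console.log(classifySwipe(0, -80));  // northward swipe, length 80
console.log(classifySwipe(3, 2));    // null – below the threshold
```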
  63. Working with spatial data is trickier:
    • Determine which person is interacting
    • Place a plane parallel to the user's abdomen at about 2/3 arm's length in front of them
    • Determine intersection point(s) of the skeleton with the plane to determine "touch" points
    • Trigger intersection events for each intersection on each animation frame
    • ...[more steps here]...
    • Profit
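The geometric core of that intersection step can be sketched as plain vector math, independent of any Kinect SDK (vectors are plain `{x, y, z}` objects; all names are illustrative):

```javascript
// Intersect a line segment (e.g. a skeleton bone from elbow to hand)
// with a plane given by a point on it and its normal vector.
// Returns the intersection point, or null if the segment doesn't cross.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }

function segmentPlaneIntersection(p0, p1, planePoint, planeNormal) {
  const dir = sub(p1, p0);
  const denom = dot(planeNormal, dir);
  if (denom === 0) return null; // segment parallel to the plane
  const t = dot(planeNormal, sub(planePoint, p0)) / denom;
  if (t < 0 || t > 1) return null; // crossing lies outside the segment
  return { x: p0.x + t * dir.x, y: p0.y + t * dir.y, z: p0.z + t * dir.z };
}

// A hand reaching through an interaction plane at z = 1:
const hit = segmentPlaneIntersection(
  { x: 0, y: 0, z: 0 },  // elbow
  { x: 0, y: 0, z: 2 },  // hand
  { x: 0, y: 0, z: 1 },  // point on the interaction plane
  { x: 0, y: 0, z: 1 }   // plane normal
);
console.log(hit); // { x: 0, y: 0, z: 1 }
```

Running this for each relevant bone on every animation frame yields the "touch" points the slide describes.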
  64. Voice interaction is simpler:
    • Use the browser's Speech Input API to determine a command
    • Use a prescribed grammar to match words to commands
    • Trigger events based on the chosen command on the active element
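The grammar-matching step can be sketched as a plain lookup, independent of whichever speech API supplies the transcript (the command table here is made up for illustration):

```javascript
// Match a recognized transcript against a prescribed grammar:
// each command lists the trigger words that select it.
const grammar = {
  next:     ["next", "forward"],
  previous: ["previous", "back"],
  select:   ["select", "choose", "pick"],
};

function matchCommand(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  for (const [command, triggers] of Object.entries(grammar)) {
    if (words.some((w) => triggers.includes(w))) return command;
  }
  return null; // nothing in the grammar matched
}

console.log(matchCommand("go back"));       // 'previous'
console.log(matchCommand("pick this one")); // 'select'
console.log(matchCommand("hello there"));   // null
```

The matched command name can then be dispatched as an abstract action event on the active element, just like a tap or a swipe.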
  65. We can set up any number of these events based

    on primitive events. Even gestures.
  66. Demo time!

  67. Go fork and conquer.