Slide 1

Responsive Web Applications
Klemen Slavič
http://about.me/klemen.slavic

Slide 2

Basically ...

Slide 3

Information. http://www.flickr.com/photos/heathbrandon/3187207970/

Slide 4

We create it. http://www.flickr.com/photos/deathtogutenberg/361664198/

Slide 5

We consume it. http://www.flickr.com/photos/butchershopglasgow/5594308079/

Slide 6

Across various media. http://www.flickr.com/photos/entropy1138/5282203331/

Slide 7

It’s interactive. http://www.flickr.com/photos/jbergen/1271176832/

Slide 8

It’s haptic. http://www.flickr.com/photos/mc/1602236114/

Slide 9

It’s responsive. http://www.flickr.com/photos/just-an-idea/3750094/

Slide 10

It wasn’t always like this, though. http://hashemartworks.net/wp-content/uploads/2008/11/tablet-1-frontback.jpg

Slide 11

Traditionally, it was bound to a medium. http://www.flickr.com/photos/wijkerslooth/4251533995/

Slide 12

But along came computers. http://www.flickr.com/photos/netphotography/2360185459/

Slide 13

Information became coded, abstract. http://www.flickr.com/photos/cstmweb/4348489567/

Slide 14

It became intangible. http://www.flickr.com/photos/bradandkathy/2404564885/

Slide 15

Reproduction was cheap, but lacking. http://www.flickr.com/photos/jvk/1458010/

Slide 16

Computers are catching up, though. http://forums.cgsociety.org/showthread.php?f=121&t=399499

Slide 17

But interaction was indirect, clunky. http://www.flickr.com/photos/holmgren/1375283440/

Slide 18

It still is. http://www.flickr.com/photos/34575118@N04/4240822343/

Slide 19

What makes it so inaccessible? http://www.flickr.com/photos/steffenj/847762820/

Slide 20

We use all five senses to experience the world around us. http://www.flickr.com/photos/yours-intuitively/4063292553/

Slide 21

Devices can cater to sight and sound. http://www.flickr.com/photos/johnencinas/3928235670/

Slide 22

But there’s a more important sense for interaction – touch. http://www.flickr.com/photos/mrehan00/4530022593/

Slide 23

Obviously, we’re ignoring taste and smell. (But frankly, we’re glad those aren’t part of the experience.) http://www.flickr.com/photos/sehdeva/3426904301/

Slide 24

Touch is direct. http://www.flickr.com/photos/monkeyballs/2197694839/

Slide 25

Interaction becomes natural. http://www.flickr.com/photos/dcmetroblogger/4915501829/

Slide 26

Not everything supports it, though. http://www.flickr.com/photos/nasteeca/4395540224/

Slide 27

But why should we limit everyone to the same baseline experience? http://www.flickr.com/photos/wookieebyte/2477314947/

Slide 28

Use whatever is available at the time. http://www.neatorama.com/2010/06/11/the-macgyver-fact-check/

Slide 29

Adapt. Evolve. http://thewoodwhisperer.com/evolution-of-a-workshop/

Slide 30

... but beware of convenience traps. http://thewoodwhisperer.com/evolution-of-a-workshop/

Slide 31

A touch does not a pointer make. http://www.flickr.com/photos/jacobpellegren/383757210/

Slide 32

Even though a tap triggers a click event, that doesn’t mean we should treat it as one.

Slide 33

Need reasons? Fine:
• A finger obscures the content
• A touch isn’t as precise as a pointer
• Touches wiggle, mouse pointers don’t (easy to mistake for a drag)
• A mouse has a single pointer, but you can have multiple touches
• A pointer is persistent, a touch is not

Slide 34

There are numerous pitfalls associated with assuming a mouse+keyboard paradigm.

Slide 35

Le Gránde Fail
• Menus with hover-triggered content
• Gallery widgets that pause on hover
• Using mouse drag events to navigate, interact
• Using double clicks
• Using right clicks
• ...
Need we go on?

Slide 36

Hands up: has this ever happened to you? http://www.flickr.com/photos/naiffer/2691471212/

Slide 37

We need to think differently. http://artoftrolling.memebase.com/2011/10/05/sweet-prince-troll-ubuntu-will-miss-you/

Slide 38

Too soon?

Slide 39

... aaaaanyways, moving on.

Slide 40

We need to abstract interaction. One action, many primitives:
• Select item → left click (mouse), tap (touch), hand/plane intersection (Kinect), voice command (speech)

Slide 41

Abstraction helps our vocabulary. http://www.inquirer.net/remembering-steve-jobs

Slide 42

It avoids convenience traps by defining new events that are triggered by different primitive interactions.
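
To make that concrete, here is a minimal sketch (in TypeScript) of the idea: application code listens for a single abstract event, and each primitive interaction dispatches it. The "activate" event name and the emitActivate helper are our own invention for illustration, not a standard or library API.

    // Hypothetical sketch: one abstract "activate" event, raised by whichever
    // primitive interaction the device supports.
    function emitActivate(target: EventTarget, x: number, y: number): void {
      target.dispatchEvent(
        new CustomEvent("activate", { bubbles: true, detail: { x, y } })
      );
    }

    // Primitive #1: mouse users activate with a left click.
    document.addEventListener("click", (e) => {
      emitActivate(e.target as EventTarget, e.clientX, e.clientY);
    });

    // Primitive #2: touch users activate with a custom "tap" event
    // (defined a few slides later).
    document.addEventListener("tap", (e) => {
      const { x, y } = (e as CustomEvent<{ x: number; y: number }>).detail;
      emitActivate(e.target as EventTarget, x, y);
    });

    // Application code only ever listens for the abstraction.
    document.addEventListener("activate", (e) => {
      const { x, y } = (e as CustomEvent<{ x: number; y: number }>).detail;
      console.log(`activated at (${x}, ${y})`);
    });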

Slide 43

We no longer think in terms of “clicks” ... http://www.flickr.com/photos/mardis/4100051854/

Slide 44

... but in terms of actions within our environment. http://www.flickr.com/photos/34228744@N07/3186833195/

Slide 45

But why all the fuss? http://www.paintinghere.com/canvas/why_so_serious_the_joker_28214.html

Slide 46

Two words: future friendly. http://futurefriend.ly

Slide 47

Touch is not the only emerging interface.

Slide 48

Kinect shows a lot of promise. http://www.avclub.com/articles/kinect,47642/

Slide 49

It provides a way to physically place yourself within the interface, indirectly.

Slide 50

Not near it (mouse+keyboard), not on it (touch), but within it.

Slide 51

Oh, and you can talk to it. (There’s a Web Audio API on the way.)

Slide 52

Seriously.

Slide 53

Think about it.

Slide 54

That’s why it’s so important that we’re able to express actions as a series of interface events, regardless of the interaction that triggered them.

Slide 55

After that, we’re free to add support for any future interaction model.

Slide 56

Just to name a few:
• Ultra- and infrasound projectors for remote haptic feedback
• Immersive holography
• Natural language interfaces
• Non-invasive neural interfaces
• THE !

Slide 57

Okay.

Slide 58

So we’re not there just yet.

Slide 59

We’re still missing browser support for most of these.

Slide 60

Let’s make use of the features that are available.

Slide 61

Let’s define a touch event – tap (a sketch follows):
• User touches a point on the screen
• User lifts the finger after no more than 300 ms and doesn’t move it by more than 10 px
• If the above conditions hold:
– we trigger a tap event and forward it the coordinates and target element
• Else:
– we do nothing
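
A minimal sketch of that heuristic in TypeScript, using the standard touchstart/touchend events. The 300 ms and 10 px thresholds come from the slide; the "tap" CustomEvent name and the single-touch bookkeeping are our own simplification (multi-touch is ignored here).

    // Sketch of the tap heuristic above, assuming a single touch point.
    const TAP_TIMEOUT_MS = 300; // maximum press duration
    const TAP_SLOP_PX = 10;     // maximum finger movement

    let tapStart: { x: number; y: number; time: number; target: EventTarget | null } | null = null;

    document.addEventListener("touchstart", (e) => {
      const t = e.touches[0];
      tapStart = { x: t.clientX, y: t.clientY, time: Date.now(), target: e.target };
    });

    document.addEventListener("touchend", (e) => {
      if (!tapStart) return;
      const t = e.changedTouches[0];
      const moved = Math.hypot(t.clientX - tapStart.x, t.clientY - tapStart.y);
      const elapsed = Date.now() - tapStart.time;
      // Quick enough and (almost) stationary? Then it's a tap.
      if (elapsed <= TAP_TIMEOUT_MS && moved <= TAP_SLOP_PX) {
        tapStart.target?.dispatchEvent(
          new CustomEvent("tap", { bubbles: true, detail: { x: tapStart.x, y: tapStart.y } })
        );
      }
      tapStart = null;
    });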

Slide 62

A swipe is similar (sketched below):
• User touches a point on the screen
• User moves the finger by more than 10 px in any direction and lifts the finger
• Determine the direction (N, S, E, W), trigger a swipe event and forward it the direction, initial coordinates and length of the swipe
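
The same pattern, extended with direction and length. Again a simplified single-touch sketch; the "swipe" event name and the dominant-axis direction test are our own convention.

    // Sketch of the swipe heuristic above, mirroring the tap detector.
    const SWIPE_MIN_PX = 10;

    let swipeStart: { x: number; y: number; target: EventTarget | null } | null = null;

    document.addEventListener("touchstart", (e) => {
      const t = e.touches[0];
      swipeStart = { x: t.clientX, y: t.clientY, target: e.target };
    });

    document.addEventListener("touchend", (e) => {
      if (!swipeStart) return;
      const t = e.changedTouches[0];
      const dx = t.clientX - swipeStart.x;
      const dy = t.clientY - swipeStart.y;
      const length = Math.hypot(dx, dy);
      if (length > SWIPE_MIN_PX) {
        // Classify by the dominant axis; screen y grows downward, so dy > 0 is south.
        const direction = Math.abs(dx) > Math.abs(dy) ? (dx > 0 ? "E" : "W") : (dy > 0 ? "S" : "N");
        swipeStart.target?.dispatchEvent(
          new CustomEvent("swipe", {
            bubbles: true,
            detail: { direction, x: swipeStart.x, y: swipeStart.y, length },
          })
        );
      }
      swipeStart = null;
    });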

Slide 63

Working with spatial data is trickier (the geometry is sketched below):
• Determine which person is interacting
• Place a plane parallel to the user’s abdomen at about 2/3 arm’s length in front of them
• Determine intersection point(s) of the skeleton with the plane to determine “touch” points
• Trigger intersection events for each intersection on each animation frame
• ...[more steps here]...
• Profit
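
Browsers expose no skeleton data, so assume joint positions arrive from elsewhere (say, a Kinect feed relayed over a WebSocket). The portable part is the geometry: intersecting the segment between two joints with the interaction plane. The Vec3 type and function names below are illustrative.

    // Intersect the segment between two joints (e.g. elbow and hand) with the
    // interaction plane. The plane is given as a point on it plus its normal.
    type Vec3 = { x: number; y: number; z: number };

    const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

    function segmentPlaneIntersection(
      segStart: Vec3, segEnd: Vec3, planePoint: Vec3, planeNormal: Vec3
    ): Vec3 | null {
      const dir = sub(segEnd, segStart);
      const denom = dot(planeNormal, dir);
      if (Math.abs(denom) < 1e-9) return null; // segment runs parallel to the plane
      const t = dot(planeNormal, sub(planePoint, segStart)) / denom;
      if (t < 0 || t > 1) return null; // the infinite line crosses, but the segment doesn't
      return {
        x: segStart.x + t * dir.x,
        y: segStart.y + t * dir.y,
        z: segStart.z + t * dir.z,
      };
    }

    // Per animation frame: test e.g. the elbow→hand segment for each tracked
    // user and fire an intersection ("touch") event whenever it crosses the plane.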

Slide 64

Voice interaction is simpler (sketched below):
• Use the browser’s Speech Input API to determine a command
• Use a prescribed grammar to match words to commands
• Trigger events based on the chosen command on the active element
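
Speech APIs were still in flux when this talk was given; as one possible shape, here is a sketch against the prefixed Web Speech API (webkitSpeechRecognition) that later shipped in some browsers. The command list and the "command" event are our own convention.

    // Match recognized speech against a prescribed command list and forward
    // the winning command to the active element.
    const COMMANDS = ["next", "previous", "select", "back"];

    const Recognition = (window as any).webkitSpeechRecognition;
    const recognizer = new Recognition();
    recognizer.lang = "en-US";

    recognizer.onresult = (event: any) => {
      const transcript: string =
        event.results[event.results.length - 1][0].transcript.trim().toLowerCase();
      const command = COMMANDS.find((c) => transcript.includes(c));
      if (command && document.activeElement) {
        document.activeElement.dispatchEvent(
          new CustomEvent("command", { bubbles: true, detail: { command } })
        );
      }
    };

    recognizer.start();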

Slide 65

We can set up any number of these events based on primitive events. Even gestures.

Slide 66

Demo time!

Slide 67

Go fork and conquer. http://github.com/krofdrakula/i-o-hu