
Understanding Touch

Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets. This suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption.

In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user's fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6mm, compared to 4mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input.
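
To make the comparison concrete, here is a minimal sketch (in Python, with invented coordinates rather than study data) of how an error offset can be computed for each model: the distance between the target and the touch position the model predicts, averaged over trials.

```python
# Minimal sketch: mean error offset of two touch models against known targets.
# All coordinates are in mm and purely hypothetical; they are not study data.
import numpy as np

targets = np.array([[10.0, 10.0], [25.0, 10.0], [40.0, 10.0]])

# Touch position under the traditional model: center of the contact area
# between finger and surface (what a capacitive sensor reports).
contact_area_centers = np.array([[11.9, 13.5], [27.0, 13.3], [42.2, 13.8]])

# Touch position under the projected center model: a point derived from
# visual features on top of the finger, projected down onto the surface.
projected_centers = np.array([[10.4, 10.9], [25.6, 10.8], [40.5, 11.1]])

def mean_offset(estimates: np.ndarray, targets: np.ndarray) -> float:
    """Mean Euclidean distance between estimated touch points and targets."""
    return float(np.linalg.norm(estimates - targets, axis=1).mean())

print(f"contact-area model:     {mean_offset(contact_area_centers, targets):.1f} mm")
print(f"projected-center model: {mean_offset(projected_centers, targets):.1f} mm")
```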

The primary contribution of this paper is to help understand touch—one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control based on the top of the finger and sensing based on the bottom side of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact area-based sensing.
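
For devices that keep sensing the contact area, the practical reading of this is to apply corrective offsets per finger posture, as the talk suggests. The sketch below is one hypothetical way to do that, assuming the device can also estimate finger pitch and roll; the bucket granularity and offset values are invented for illustration.

```python
# Hypothetical corrective-offset lookup for a contact-area-based touch device.
# Assumes the device reports the contact-area centroid plus rough estimates of
# finger pitch and roll; the offset table below is invented for illustration.
from typing import Dict, Tuple

# (pitch_bucket_deg, roll_bucket_deg) -> (dx_mm, dy_mm) measured mean offsets
CORRECTIVE_OFFSETS: Dict[Tuple[int, int], Tuple[float, float]] = {
    (15, 0): (-0.5, -3.8),
    (45, 0): (-0.3, -2.6),
    (90, 0): (-0.1, -1.2),
}

def nearest_bucket(pitch: float, roll: float) -> Tuple[int, int]:
    """Pick the calibration bucket closest to the estimated finger posture."""
    return min(CORRECTIVE_OFFSETS,
               key=lambda b: (b[0] - pitch) ** 2 + (b[1] - roll) ** 2)

def correct(x: float, y: float, pitch: float, roll: float) -> Tuple[float, float]:
    """Subtract the per-posture mean offset from the raw contact-area centroid."""
    dx, dy = CORRECTIVE_OFFSETS[nearest_bucket(pitch, roll)]
    return x - dx, y - dy

# Example: a raw centroid at (25.0, 13.1) mm with the finger pitched at 40 degrees.
print(correct(25.0, 13.1, pitch=40.0, roll=5.0))
```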

More information at http://www.christianholz.net/understanding_touch.html

Christian Holz

May 11, 2011

Transcript

  1. again, we are wondering? how did you know? what features did you look for? what logical steps took place in your head...

  2. again, we are wondering? how did you know? what features did you look for? what logical steps took place in your head... this is what this talk is about...

  3. 2

  4. ?

  5. how to determine the user’s mental model? (so we can invert it to eliminate error offsets)

  6. ?

  7. measure points and determine fit of a model using an unambiguous device (e.g. mouse):

  8. but what shall we measure? there are infinitely many ways users might have mapped these crosshairs to 6D

  9. most frequent answer: 26 of 30 said that they placed the center of the contact area over the target

  10. but from what perspective? between user’s eyes / side of the touch surface / camera above the touch surface / visual

  11. but from what perspective? between user’s eyes / side of the touch surface / camera above the touch surface / visual

  12. 7x7

  13. we varied finger pitch and roll (angle values shown: 15°, 25°, 45°, 65° and -15°, 0°, 15°, 45°, 90°; x marks the combinations tested)

  14. design: 6 combinations of finger angles (pitch, roll) × 4 head positions × 2 blocks × 4 repetitions = 192 trials per participant; 30 + 12 + 12 participants (a sketch of this trial count follows the transcript)

  15. massive error offsets from the contact area, but they disappear with the projected center model → more likely to explain what happens in users’ heads

  16. user targets using features on top of finger; current devices sense features at the bottom of finger

  17. parallax: user targets using features on top of finger; current devices sense features at the bottom of finger

  18. stick with tracking based on center of contact area (e.g., capacitive, FTIR): this can never be accurate → be ready to apply corrective offsets

  19. Master’s Thesis: Selection and Querying Techniques for Time-Series Graphs; thanks! http://www.christianholz.net http://www.patrickbaudisch.com

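As referenced from slide 14, here is a small sketch of how the factorial design multiplies out to 192 trials per participant; the six (pitch, roll) pairs and head-position labels below are placeholders, not the study's actual conditions.

```python
# Sketch of the per-participant trial count from slide 14:
# 6 finger-angle combinations x 4 head positions x 2 blocks x 4 repetitions = 192.
# The specific (pitch, roll) pairs and head positions are placeholders only.
from itertools import product

finger_angles = [(15, 0), (25, 0), (45, 0), (65, 0), (45, -15), (45, 45)]  # placeholder pairs
head_positions = ["far-left", "left", "right", "far-right"]                # placeholder labels
blocks = [1, 2]
repetitions = [1, 2, 3, 4]

trials = list(product(finger_angles, head_positions, blocks, repetitions))
assert len(trials) == 192  # matches the 192 trials per participant on slide 14
print(f"{len(trials)} trials per participant")
```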