
Understanding Touch

Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets, which suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption.

In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user's fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6mm, compared to 4mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input.
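
For concreteness, here is a minimal sketch, in Python with hypothetical array names, of how such error offsets are quantified: a model's offset is the Euclidean distance between the touch position it reports and the intended target, averaged over trials. Nothing below is from the paper's code; it only restates the comparison.

```python
import numpy as np

def mean_error_offset(reported_xy: np.ndarray, target_xy: np.ndarray) -> float:
    """Mean Euclidean distance (here in mm) between the touch positions a
    model reports and the intended targets, one (x, y) row per trial."""
    return float(np.linalg.norm(reported_xy - target_xy, axis=1).mean())

# With per-trial data recorded as in the studies, the comparison above would read:
# mean_error_offset(contact_area_xy, targets)      -> about 4.0 mm
# mean_error_offset(projected_center_xy, targets)  -> about 1.6 mm
```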

The primary contribution of this paper is to help understand touch, one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control, based on the top of the finger, and sensing, based on the bottom of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact-area-based sensing.
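
The parallax explanation reduces to simple ray geometry, sketched below under assumed coordinates: the perceived touch point is where the ray from the eye through the finger-top feature meets the touch surface, while a contact-area device reports the centroid under the finger. The eye position, feature point, and centroid are invented for illustration.

```python
import numpy as np

def projected_center(eye: np.ndarray, finger_top: np.ndarray) -> np.ndarray:
    """Intersect the ray from the eye through the finger-top feature with
    the touch surface, modeled as the plane z = 0; returns (x, y)."""
    direction = finger_top - eye
    t = -eye[2] / direction[2]        # ray parameter where z reaches 0
    return (eye + t * direction)[:2]

# Illustrative coordinates in mm (assumptions, not measurements from the paper):
eye        = np.array([0.0, -300.0, 400.0])   # approximate eye position
finger_top = np.array([10.0, 20.0, 15.0])     # visual feature on top of the finger
contact    = np.array([12.0, 24.0])           # contact-area centroid the device senses

offset = projected_center(eye, finger_top) - contact
print(offset)   # systematic "parallax" offset a contact-area device would show
```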

More information at http://www.christianholz.net/understanding_touch.html


Christian Holz

May 11, 2011

Transcript

  1. understanding touch Christian Holz Patrick Baudisch
  2. is this user pointing at the center of the crosshairs?

  3. we are wondering: how did you know?

  4. maybe the finger is casting a ray...

  5. is this user targeting the center of the crosshairs?

  6. again, we are wondering: how did you know? what features did you look for? what logical steps took place in your head...
  7. again, we are wondering: how did you know? what features did you look for? what logical steps took place in your head... this is what this talk is about...
  8. 2

  9. 2D 6D

  10. 2D 6D user

  11. 2D user 6D device

  12. this works if and only if the device inverts the user’s mapping
  13. 2D user 6D device

  14. so how do users map 2D to 6D? what happens

    in users’ heads?
  15. ?

  16. >1 billion (capacitive) touch devices are based on an implicit

    assumption...
  17. 2D 6D contact area device

  18. it does not seem so...

  19. [Holz & Baudisch CHI ’10]

  20. [Holz & Baudisch CHI ’10]

  21. how to determine the user’s mental model? (so we can

    invert it to eliminate error offsets)
  22. ?

  23. 3 what might it be... preview

  24. (image-only slide)
  25. 4 approach

  26. measure points and determine fit of a model using an unambiguous device (e.g., mouse):
  27. but what shall we measure? there are infinitely many ways users might have mapped these crosshairs to 6D
  28. we need to get to a point where models are

    countable
  29. we had to revert to the basic experimental process

  30. guess a model; try it out in an experiment; if the outcome is bad, repeat
  31. bad model → large error offsets; good model → small error offsets

  32. 6 let’s ask users

  33. interviews

  34. most frequent answer: 26 of 30 said that they placed

    the center of the contact area over the target
  35. [Holz & Baudisch CHI ’10] but we already knew this

    is not it!
  36. users are known to guess and retrofit explanations. users cannot

    see the contact area.
  37. target basket swoosh... a b

  38. we moved on to the second most-frequently listed feature: 13

    of 30 mentioned visual control
  39. but from what perspective? between user’s eyes, side of the touch surface, camera above the touch surface... visual
  40. but from what perspective? between user’s eyes, side of the touch surface, camera above the touch surface... visual
  41. finally, we could now enumerate models...

  42. 7 enumerating visual models

  43.–55. (image-only slides)
  56. 7×7

  57. bad model → large error offsets; good model → small error offsets

  58. 8 we ran 3 studies

  59. (image-only slide)
  60. we varied... pitch: 15°, 25°, 45°, 65°; roll: −15°, 0°, 15°, 45°, 90°
  61. ...and head position

  62. design: 6 combinations of finger angles (pitch, roll) × 4 head positions × 2 blocks × 4 repetitions = 192 trials per participant; 30 + 12 + 12 participants
  63. 23800 features 3400 images

  64. results

  65.–72. (image-only slides)
  73. projected center model

  74. projected center model

  75. projected center model

  76. projected center model

  77.–79. (image-only slides)
  80. remaining error offsets: projected center 1.6mm, contact area 4mm

  81. remaining error offsets: projected center 1.6mm, contact area 4mm

  82. conclusions

  83. massive errors from contact area, but they disappear with the projected center model
  84. massive errors from contact area, but they disappear with the projected center model → more likely to explain what happens in users’ heads
  85. this suggests one possible explanation for the inaccuracy of the

    contact area model...
  86. (image-only slide)
  87. current devices sense features at the bottom of finger

  88. user targets using features on top of finger; current devices sense features at the bottom of finger
  89. parallax: user targets using features on top of finger; current devices sense features at the bottom of finger
  90. 9 implications for engineering

  91. when constructing future touch devices we essentially have two choices:

  92. stick with tracking based on the center of the contact area (e.g., capacitive, FTIR); this can never be accurate → be ready to apply corrective offsets (see the sketch after the transcript)
  93. projected center model

  94. C-Slate [Izadi et al. Tabletop ’07] LucidTouch [Wigdor et al.

    UIST ’07]
  95. thanks http://www.christianholz.net http://www.patrickbaudisch.com
  96. 2D user 6D device

  97. [Holz&Baudisch CHI’10]

  98. [Holz&Baudisch CHI’10]

  99. [Holz&Baudisch CHI’10]

  100. [Holz&Baudisch CHI’10]
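
As noted at slide 92, a device that sticks with contact-area tracking can compensate with corrective offsets. Below is a minimal sketch, with invented numbers and names, of one plausible calibration: fit a constant offset per finger-angle condition as the mean error between sensed centers and targets, then subtract it from later readings. It illustrates the idea only; it is not the authors' implementation.

```python
import numpy as np

def fit_offset(sensed_xy: np.ndarray, target_xy: np.ndarray) -> np.ndarray:
    """Least-squares constant correction for one finger-angle condition:
    the mean (sensed - target) error vector over calibration trials."""
    return (sensed_xy - target_xy).mean(axis=0)

def correct(sensed_xy: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Subtract the calibrated offset from new contact-area readings."""
    return sensed_xy - offset

# Hypothetical calibration trials for one condition (e.g., pitch 45°, roll 0°);
# all numbers are invented for illustration:
sensed  = np.array([[12.1, 23.8], [11.7, 24.3], [12.4, 23.5]])
targets = np.array([[10.0, 20.0], [10.0, 20.0], [10.0, 20.0]])

offset = fit_offset(sensed, targets)   # the condition's systematic error offset
print(correct(sensed, offset))         # readings with the bias removed
```

In practice such offsets would be looked up per pitch/roll condition, which is presumably why the studies varied finger angles and head positions systematically.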