CBDAR 2013 Keynote http://imlab.jp/cbdar2013/index.shtml#keynote
Most applications in intelligent environments so far rely strongly on specific sensor combinations at predefined positions, orientations, etc. While this might be acceptable for some application domains (e.g., industry), it hinders the wide adoption of pervasive computing. How can we extract high-level information about human actions and complex real-world situations from heterogeneous ensembles of simple, often unreliable sensors embedded in commodity devices?
This talk focuses mostly on how to use body-worn devices for activity recognition in general, and on how to combine them with infrastructure sensing and computer vision approaches for a specific high-level human activity: understanding knowledge acquisition (e.g., recognizing reading activities).
We discuss how placement variations of electronic appliances carried by the user influence the possibility of using sensors integrated in those appliances for human activity recognition. We categorize possible variations into four classes: environmental placements; placement on different body parts (e.g., a jacket pocket on the chest vs. a hip holster vs. a trousers pocket); small displacements within a given coarse location (e.g., a device shifting in a pocket); and different orientations. For each of these variations, we give an overview of our efforts to deal with them.
In the second part of the talk, we combine several pervasive sensing approaches (computer vision, motion-based activity recognition, etc.) to tackle the problem of recognizing and classifying knowledge acquisition tasks, with a special focus on reading. We discuss which sensing modalities can be used to recognize digital and offline reading, as well as how to combine them dynamically.