Slide 1

Slide 1 text

Recognition of Document Types Using Mobile Eye Tracking
"I know what you are reading"
Kai Kunze, Andreas Bulling, Yuzuko Utsumi, Yuki Shiga, Koichi Kise
Osaka Prefecture University / Max Planck Institute, Saarbrücken

Slide 2

Slide 2 text

Motivation
Quantified approach to reading (knowledge acquisition)
People who read more have higher vocabulary skills and higher general knowledge [1]
Quantified feedback helps people improve their habits, similar to apps/devices that track fitness and health, which have been shown to improve physical fitness [2]
Very few in-situ studies
Tracking reading habits: How much do you read? How fast? How often? What do you read? How much do you understand?
"Can I copy the habits of my thesis advisor to become a better researcher?"

[1] A. Cunningham and K. Stanovich. What reading does for the mind. Journal of Direct Instruction, 1(2):137–149, 2001.
[2] A. Bulling, J. A. Ward, and H. Gellersen. Multimodal recognition of reading activity in transit using body-worn sensors. ACM Transactions on Applied Perception.

Slide 3

Slide 3 text

Tracking Reading Habits
How much do you read? How fast? How often?
What do you read?
How much do you understand?

Slide 4

Slide 4 text

Tracking Reading Habits
How much do you read? How fast? How often?
What do you read?
How much do you understand?

K. Kunze, H. Kawaichi, K. Yoshimura, K. Kise. The Wordometer: Estimating the Number of Words Read Using Document Image Retrieval and Mobile Eye Tracking. ICDAR 2013, Washington D.C. (Best Paper).
K. Kunze, H. Kawaichi, K. Yoshimura, K. Kise. Towards Inferring Language Expertise Using Eye Tracking. CHI Work-in-Progress, Paris, 2013.

Slide 6

Slide 6 text

Document Classification
Differences in the amount and layout of text and images
Focus on Japanese documents: reading in the native tongue
Reading directions: Yokogaki (horizontal) and Tategaki (vertical), also used in other parts of Asia (China, Korea, etc.)
[Figure: example pages in Yokogaki and Tategaki layout]

Slide 7

Slide 7 text

Using Eye Tracking
Visual behavior is subject to many influences: task, document type, environment, etc.
Which features are helpful to distinguish document types?
Sliding window approach (over 200 fixations and saccades) to calculate features; see the sketch below
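A minimal sketch of this windowing step, assuming the gaze data arrives as a list of fixation/saccade events; the step size of 50 events and the function name are assumptions, since the slides only give the window length:

def sliding_windows(events, size=200, step=50):
    # Yield successive windows of 200 gaze events (fixations/saccades);
    # the step size is an assumed value, not taken from the slides.
    for start in range(0, len(events) - size + 1, step):
        yield events[start:start + size]

Each window is then turned into a feature vector using the features described on the following slides.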

Slide 8

Slide 8 text

Features: Saccade Direction Count
Direction counts for four direction bins:
a) between 335° and 25°
b) between 65° and 115°
c) between 155° and 205°
d) between 245° and 295°
Gives information about the main reading direction and the layout of text/images
[Figure: the four direction bins a–d on a circle]
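A sketch of this feature, assuming each saccade is given as a (dx, dy) displacement between consecutive fixations (the representation and the function name are assumptions):

import math

# The four direction bins from the slide; bin a wraps around 0°.
BINS = {"a": (335, 25), "b": (65, 115), "c": (155, 205), "d": (245, 295)}

def direction_counts(saccades):
    # Count how many saccades fall into each direction bin.
    counts = {name: 0 for name in BINS}
    for dx, dy in saccades:
        angle = math.degrees(math.atan2(dy, dx)) % 360
        for name, (lo, hi) in BINS.items():
            in_bin = (lo <= angle < hi) if lo < hi else (angle >= lo or angle < hi)
            if in_bin:
                counts[name] += 1
    return counts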

Slide 9

Slide 9 text

Features: Saccade Direction Mean/Variance
Calculate the saccade direction (the angle of the saccade)
Use the mean and variance of this angle as features
Also related to the layout and the amount of text and images
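The slides do not say whether plain or circular statistics are used; a circular formulation, sketched below, avoids the wrap-around problem at 0°/360°:

import math

def direction_mean_variance(angles_deg):
    # Circular mean and variance of saccade angles, computed via the
    # mean resultant vector of the unit vectors for each angle.
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    mean_angle = math.degrees(math.atan2(my, mx)) % 360
    variance = 1.0 - math.hypot(mx, my)  # circular variance in [0, 1]
    return mean_angle, variance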

Slide 10

Slide 10 text

Features: Quantile Distance and Slope
5% to 95% quantile distance: gives some information about the average page size
Slope: indicates the general reading direction
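A sketch under the assumption that the quantiles are taken over fixation coordinates along one axis and that the slope comes from a straight-line fit through the fixation points (neither detail is spelled out on the slide):

import numpy as np

def quantile_distance(xs, lo=5, hi=95):
    # Spread between the 5% and 95% quantiles of fixation positions,
    # a rough proxy for the page size covered while reading.
    return np.percentile(xs, hi) - np.percentile(xs, lo)

def reading_slope(xs, ys):
    # Slope of a least-squares line through the fixation points,
    # indicating the general reading direction.
    slope, _intercept = np.polyfit(xs, ys, deg=1)
    return slope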

Slide 11

Slide 11 text

Experimental Setup
8 participants (4 male, 4 female, ages 21-34)
10 min of reading using an SMI mobile eye tracker, 30 Hz binocular (one sample every 33 ms, so only saccades of at least 33 ms are captured)
5 document types: novel, manga, fashion magazine, newspaper, textbook
5 locations: office, coffee shop, home setting, library, lecture hall
Latin square assignment of document type, location, and starting position; see the sketch below
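As an illustration of the counterbalancing idea (the exact Latin square used in the study is not given), a cyclic Latin square over the five document types might look like this:

DOCS = ["novel", "manga", "fashion magazine", "newspaper", "textbook"]

def latin_square(items):
    # Cyclic Latin square: each item appears exactly once per row
    # and exactly once per column (i.e., once per session position).
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

# Row i gives the document order for participant (group) i.
for row in latin_square(DOCS):
    print(row)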

Slide 12

Slide 12 text

Natural Reading: newspaper in an office

Slide 13

Slide 13 text

Examples

Slide 14

Slide 14 text

Results
User dependent: 99%
User independent (leave one participant out): 74% frame by frame, 90% with majority voting
A sketch of this evaluation scheme follows.
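A minimal sketch of the leave-one-participant-out evaluation with majority voting per recording; the classifier (kNN) and the data layout are assumptions, since the slides only report the scheme and the accuracies:

import numpy as np
from collections import Counter
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y, users, recordings):
    # X: per-window feature vectors; y: document types; users/recordings:
    # one label per window identifying participant and reading session.
    frame_hits, vote_hits = [], []
    for train, test in LeaveOneGroupOut().split(X, y, groups=users):
        clf = KNeighborsClassifier().fit(X[train], y[train])
        pred = clf.predict(X[test])
        frame_hits.extend(pred == y[test])  # frame-by-frame accuracy
        for rec in np.unique(recordings[test]):
            mask = recordings[test] == rec
            # Majority vote over all windows of the same recording.
            voted = Counter(pred[mask]).most_common(1)[0][0]
            vote_hits.append(voted == y[test][mask][0])
    return np.mean(frame_hits), np.mean(vote_hits)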

Slide 15

Slide 15 text

Magazine and Textbook Misclassification
The number of horizontal saccades for the "Textbook" class is higher than for "Magazine". This also holds for the data from Participant 8. Yet, all of the "Magazine" feature points of P8 lie in the "Textbook" region of the other participants.
[Figure: feature scatter plots, panels a–d]

Slide 16

Slide 16 text

Conclusions
How generalizable is it to other languages?
We added a science paper (double column) for all 4 users: no change in performance
As long as the text and image layout vary significantly, our method should work.
Can we quantify the amount of pictures vs. text?

Slide 17

Slide 17 text

Future Work
[Mockup of a "read.it" reading-log app: words read / word count, document types (Manga, Science Papers), pages read (20 pages, 15 pages), concentrated reading, and a 30 min Japanese overview]

Slide 18

Slide 18 text

Questions, remarks, violent dissent?
http://kaikunze.de
Twitter: @k_garten
Facebook: kai.kunze
App.net: @kkai
[email protected]
https://github.com/kkai/

Shameless advertisement: Augmented Human 2014, Kobe
http://bit.ly/augmented2014
Paper deadline: Jan 11, 2014
Conference: March 7-9, 2014