Slide 71
Publications covered by the thesis (*co-first authors)
1. T. Ohkawa, R. Furuta, and Y. Sato.
Efficient annotation and learning for 3D hand pose estimation: A survey. IJCV, 2023
2. Z. Fan*, T. Ohkawa*, L. Yang*, ... (20 authors), A. Yao.
Benchmarks and challenges in pose estimation for egocentric hand interactions with objects. In ECCV, 2024
3. T. Ohkawa, J. Lee, S. Saito, J. Saragih, F. Prada, Y. Xu, S. Yu, R. Furuta, Y. Sato, and T. Shiratori.
Generative modeling of shape-dependent self-contact human poses. In ICCV, 2025
4. N. Lin*, T. Ohkawa*, M. Zhang, Y. Huang, R. Furuta, and Y. Sato.
SiMHand: Mining of similar hands for large-scale 3D hand pose pre-training. In ICLR, 2025
5. T. Ohkawa, Y.-J. Li, Q. Fu, R. Furuta, K. M. Kitani, and Y. Sato.
Domain adaptive hand keypoint and pixel localization in the wild. In ECCV, 2022
6. T. Ohkawa, T. Yagi, T. Nishimura, R. Furuta, A. Hashimoto, Y. Ushiku, and Y. Sato.
Exo2EgoDVC: Dense video captioning of egocentric procedural activities using web instructional videos. In WACV, 2025
Related publications not covered by the thesis
A. T. Ohkawa, K. He, F. Sener, T. Hodan, L. Tran, and C. Keskin.
AssemblyHands: Towards egocentric activity understanding via 3D hand pose estimation. In CVPR, 2023
B. T. Banno, T. Ohkawa, R. Liu, R. Furuta, and Y. Sato.
AssemblyHands-X: 3D hand-body co-registration for understanding bi-manual human activities. In MIRU, 2025
C. R. Liu, T. Ohkawa, M. Zhang, and Y. Sato.
Single-to-dual-view adaptation for egocentric 3D hand pose estimation. In CVPR, 2024
D. T. Ohkawa, T. Yagi, A. Hashimoto, Y. Ushiku, and Y. Sato.
Foreground-aware stylization and consensus pseudo-labeling for domain adaptation of first-person hand segmentation. IEEE Access, 2021
Awards / Fellowships
• CVPR EgoVis Distinguished Paper Award ’25, Google PhD Fellowship ’24, ETH Zurich Leading House Asia ’23, MSRA D-CORE ’23, JSPS DC1 ’22, JST ACT-X (’20–’22, Accel. ’23)