➢ Unsupervised Feature Extraction Applied to Bioinformatics: A PCA Based and TD Based Approach [Taguchi, 2019]
➢ Mathematics of integrated analysis of heterogeneous bio-data via matrix and tensor decompositions, parts 1-3 [Tsuyuzaki, 2021-2022]
⚫ Network data analysis (wireless communications, traffic, social networks, bibliographic data, Web)
➢ Accurate Recovery of Internet Traffic Data: A Tensor Completion Approach [Xie+, 2016]
➢ HaTen2: Billion-scale Tensor Decomposition [Jeon+, 2015]
⚫ etc.
2009 [pdf], talk [youtube]
Review paper by Cichocki et al.:
➢ Tensor Decompositions for Signal Processing Applications [Cichocki+, IEEE SPM, 2015] [pdf]
  (IEEE Signal Processing Society Magazine Best Paper Award, presented at ICASSP)
Review paper by Sidiropoulos et al.:
➢ Tensor Decomposition for Signal Processing and Machine Learning [Sidiropoulos+, IEEE TSP, 2017] [pdf]
Book by Cichocki et al.:
➢ Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 [link], Part 2 [pdf]
  [Cichocki+, Foundations and Trends in Machine Learning, 2016] [link]
T. G. Kolda (MathSci.ai), A. Cichocki (Skoltech), L. De Lathauwer (KU Leuven), N. Sidiropoulos (Univ. of Virginia)
[link] Table of contents:
Ch. 1  Tensor decompositions: Computations, applications, and challenges
Ch. 2  Transform-based tensor SVD in multidimensional image recovery
Ch. 3  Partensor
Ch. 4  A Riemannian approach to low-rank tensor learning
Ch. 5  Generalized thresholding for low-rank tensor recovery
Ch. 6  Tensor principal component analysis
Ch. 7  Tensors for deep learning theory
Ch. 8  Tensor network algorithms for image classification
Ch. 9  High-performance TD for compressing and accelerating DNNs
Ch. 10 Coupled tensor decomposition for data fusion
Ch. 11 Tensor methods for low-level vision (T. Yokota, C. F. Caiafa, and Q. Zhao; a summary of image restoration methods based on tensor decomposition)
Ch. 12 Tensors for neuroimaging
Ch. 13 Tensor representation for remote sensing images
Ch. 14 Structured TT decomposition for speeding up kernel learning
Qibin Zhao (RIKEN AIP), Cesar F. Caiafa (CONICET)
Best rank-1 approximation
⚫ The best rank-1 approximation of a tensor 𝒳 is the rank-1 tensor 𝒴 that minimizes ‖𝒳 − 𝒴‖.
⚫ For almost all real tensors, the best rank-1 approximation is unique.
➢ Theorem 2 of Friedland & Ottaviani, "The Number of Singular Vector Tuples and Uniqueness of Best Rank-One Approximation of Tensors," Foundations of Computational Mathematics, 14:1209-1242, 2014.
⚫ Subtracting the best rank-1 approximation naturally reduces the error, but the CP rank does not necessarily decrease: the residual of a rank-2 tensor can even have rank 3.
➢ Stegeman & Comon, Linear Algebra and its Applications, 433(7):1276-1300, 2010.
⚫ For matrices, the Eckart-Young theorem says the best rank-1 residual equals the truncated-SVD residual, so the rank always decreases.
⚫ [Kim+, 2016 (in ICLR)]
⚫ Supervised Learning with Tensor Networks [Stoudenmire+, 2016 (in NeurIPS)]
⚫ Exponential Machines [Novikov+, 2017 (in ICLR workshop)]
⚫ Compact RNNs with Block-term Tensor Decomposition [Ye+, 2018 (in CVPR)]
⚫ Compressing RNNs with Tensor Ring [Pan+, 2019 (in AAAI)]
⚫ Tensor network decomposition of CNN kernels [Hayashi+, 2019 (in NeurIPS)]
⚫ For understanding DNNs [Li+, 2019 (in ICASSP)] [Li+, 2020 (in AISTATS)]
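The common idea behind these compression papers is to replace a large dense weight tensor with a chain of small cores. A minimal NumPy sketch of the tensor-train (TT-SVD) version of that idea follows; the shapes and ranks are illustrative, not taken from any of the papers above.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """TT-SVD: peel off TT cores one mode at a time via truncated SVDs."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        # Carry the remaining factor forward, refolded for the next mode.
        mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Toy "weight tensor" that is exactly TT-rank 2 by construction.
rng = np.random.default_rng(0)
G = [rng.standard_normal((1, 4, 2)),
     rng.standard_normal((2, 5, 2)),
     rng.standard_normal((2, 6, 1))]
W = tt_reconstruct(G)

cores = tt_svd(W, max_rank=2)
err = np.linalg.norm(W - tt_reconstruct(cores)) / np.linalg.norm(W)
n_params = sum(c.size for c in cores)
print(err, n_params, W.size)  # near-zero error, 40 parameters instead of 120
```

With low TT ranks the parameter count grows linearly in the number of modes instead of exponentially, which is the storage/speed win these papers exploit in RNN and CNN layers.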