
[IJCAI-ECAI 2022] Evaluation Methods for Representation Learning: A Survey

The slides used for the IJCAI-ECAI 2022 survey track presentation.

Extended version: https://arxiv.org/abs/2204.08226

Kento Nozawa

June 15, 2022

Transcript

  1. Evaluation Methods for Representation Learning: A Survey
     Kento Nozawa¹,², Issei Sato¹ (1: The University of Tokyo, 2: RIKEN AIP)
     The extended survey is available!
  2. Representation learning [Bengio et al., 2013]
     Goal: learning a feature extractor f that can automatically extract generic representations from a dataset.
     • Feature extractor: f : 𝒳 → ℝ^d, e.g., deep neural nets.
     • Representation: a real-valued vector f(x) ∈ ℝ^d.
     • "Generic representation" isn't well-defined.
     [Figure: an input x is mapped by f(·) to an extracted representation f(x) ∈ ℝ^d, e.g., (10.1, −0.3, …, 1.7).]
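     A minimal sketch of such an extractor in Python, assuming PyTorch and a recent torchvision: a ResNet-18 pre-trained on ImageNet with its classification head removed maps an image tensor x to a 512-dimensional representation f(x).

     import torch
     import torchvision

     # Assumed setup: ImageNet pre-trained ResNet-18; replacing the final
     # classifier with an identity makes the model return the 512-d features.
     backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
     backbone.fc = torch.nn.Identity()
     backbone.eval()

     x = torch.randn(1, 3, 224, 224)      # dummy input image tensor
     with torch.no_grad():
         representation = backbone(x)     # f(x) in R^512
     print(representation.shape)          # torch.Size([1, 512])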
  3. Motivation of this survey
     • Thanks to the flexibility of representation learning, we have various ways to evaluate representation learning algorithms in practice.
     • Evaluation methods are a critical part when we design a novel algorithm or analyse existing algorithms.
     • We organise the existing evaluation methods into four categories.
  4. Problem setting
     Given several pre-trained feature extractors { f̂_i }, we determine the best extractor among them.
     [Figure: candidate extractors, e.g., a supervised ResNet [He et al., 2016] and a self-supervised ViT [Dosovitskiy et al., 2021].]
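     A minimal sketch of this selection step in Python; evaluate is a hypothetical placeholder for any of the evaluation methods described in the following slides (e.g., a linear-probe accuracy).

     # Hypothetical helper: score every candidate extractor f_i with an
     # evaluation function and return the best-scoring one.
     def select_best_extractor(extractors, evaluate):
         scores = {name: evaluate(f) for name, f in extractors.items()}
         best = max(scores, key=scores.get)
         return best, scores

     # Usage (names are assumptions):
     # candidates = {"supervised_resnet": f_resnet, "selfsup_vit": f_vit}
     # best, scores = select_best_extractor(candidates, evaluate=linear_probe_accuracy)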
  5. Evaluation method 1: Pre-training
     • Feature extractor f̂ can be seen as a pre-trained model [Hinton et al., 2006] to solve a task.
     • We evaluate extractors with the metric of the new task, such as validation accuracy.
     • There are two common training strategies on the new task:
       • Frozen: train only a simple predictor on the features extracted by f̂.
       • Fine-tune: train f̂ too on the new task.
     • Note: model and data sizes of pre-training correlate with the performance (scaling law [Kaplan et al., 2020]).
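     A minimal sketch of the frozen strategy, assuming scikit-learn and a hypothetical extract_features helper that wraps the frozen extractor f̂: a linear probe is trained on the extracted features and scored by validation accuracy. Fine-tuning differs only in that f̂'s parameters are also updated on the new task.

     from sklearn.linear_model import LogisticRegression

     def linear_probe_accuracy(extract_features, X_train, y_train, X_val, y_val):
         # Frozen strategy: the extractor is only queried, never updated.
         Z_train = extract_features(X_train)
         Z_val = extract_features(X_val)
         probe = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
         return probe.score(Z_val, y_val)  # metric of the new task (accuracy)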
  6. Evaluation method 2: Regularisation
     • The feature extractor works as a regulariser that keeps the model for the new task from forgetting pre-trained knowledge.
     • We evaluate extractors with the metric of the new task, such as validation accuracy.
     • Note: this requires additional memory space for regularisation.
     [Figure: a loss function makes the model for a new task similar to the pre-trained model.]
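     A minimal sketch of one such regulariser, assuming PyTorch: an L2 penalty pulling the new-task model's parameters toward a frozen copy of the pre-trained weights (in the spirit of L2-SP); the coefficient lam is an assumption. The frozen copy is the additional memory cost noted above.

     import torch

     def regularised_loss(task_loss, model, pretrained_params, lam=0.01):
         # Penalise the distance between the current parameters and a frozen
         # copy of the pre-trained parameters (the extra memory cost).
         penalty = sum(((p - p0) ** 2).sum()
                       for p, p0 in zip(model.parameters(), pretrained_params))
         return task_loss + lam * penalty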
  7. Evaluation method 3: Dimensionality reduction
     • Representation learning can be seen as dimensionality reduction when the extracted feature's dimensionality is smaller than the original one.
     • Evaluation: how well the extractor performs as dimensionality reduction.
     • Related evaluation: visualisation (directly learn 2-d features or apply t-SNE [van der Maaten and Hinton, 2008]).
     • Evaluation: compare the scatter plots visually.
     [Figure: 2-d scatter plot of extracted features coloured by classes 0–9.]
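     A minimal sketch of the visualisation-based evaluation, assuming scikit-learn and matplotlib: extracted features are projected to 2-d with t-SNE and plotted coloured by class, so scatter plots produced by different extractors can be compared visually.

     import matplotlib.pyplot as plt
     from sklearn.manifold import TSNE

     def plot_tsne(features, labels):
         # Project d-dimensional extracted features to 2-d and colour by class.
         embedded = TSNE(n_components=2).fit_transform(features)
         plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=5)
         plt.colorbar(label="class")
         plt.show()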
  8. Evaluation method 4: As an auxiliary task
     • Given several representation learning algorithms, we would like to select the one that improves the main task's performance.
     • Example: cross-entropy loss for image classification (main task) and a self-supervised contrastive loss (auxiliary task).
     • We evaluate them with the metric of the main task, e.g., validation accuracy.
     • Note: unlike the other methods, we can directly search the hyper-parameters of the representation learning part.
     [Figure: the training objective combines the main task's loss and the auxiliary loss.]
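     A minimal sketch of the combined objective, assuming PyTorch; the weight alpha and the aux_loss argument are assumptions standing in for, e.g., a weighted self-supervised contrastive loss.

     import torch.nn.functional as F

     def combined_loss(logits, targets, aux_loss, alpha=0.5):
         # Main task: image classification with cross-entropy.
         main_loss = F.cross_entropy(logits, targets)
         # Auxiliary task: e.g. a self-supervised contrastive loss computed elsewhere.
         return main_loss + alpha * aux_loss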
  9. Conclusion
     • We organised the existing evaluation methods for representation learning:
       1. Pre-training
       2. Regularisation
       3. Dimensionality reduction (and visualisation)
       4. Auxiliary task
     • Discussion:
       • Existing evaluation methods focus on a single task's performance.
       • Can we develop an evaluation method for universal representation learning?
     More discussion and a theoretical survey are available in the extended version!