
Exploring DNN via Layer-Peeled Model

Tags: Deep Learning, Imbalanced Classification

This note is for Fang, C., He, H., Long, Q., & Su, W. J. (2021). Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training. arXiv:2101.12699 [cs, math, stat].

Two Applications

Balanced Data

neural collapse: the emergence of certain geometric patterns in the last-layer features and the last-layer classifiers when a neural network for a balanced classification problem is well trained, in the sense that training is pushed not only to zero misclassification error but also to negligible cross-entropy loss. Concretely:

  • the last-layer features from the same class tend to be very close to their class mean
  • these $K$ class means, after centering at the global mean, have the same length and form the largest possible equal-sized angles between any pair, i.e., they collapse to the vertices of a simplex equiangular tight frame (ETF) up to scaling (see the definition after this list)
  • the last-layer classifiers become dual to the class means in the sense that, for each class, the classifier and the class mean are equal up to a scaling factor
  • the network's decision rule collapses to choosing the class whose mean is closest in Euclidean distance to the last-layer activations of the test sample
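For concreteness, here is the standard definition of a $K$-simplex ETF used in the neural collapse literature (the notation below is mine, not copied from the paper): its vertices are the columns of

$$\mathbf{M} = \sqrt{\frac{K}{K-1}}\,\mathbf{P}\left(\mathbf{I}_K - \frac{1}{K}\mathbf{1}_K\mathbf{1}_K^\top\right),$$

where $\mathbf{P} \in \mathbb{R}^{p \times K}$ (with $p \ge K$) is any partial orthogonal matrix satisfying $\mathbf{P}^\top\mathbf{P} = \mathbf{I}_K$. Each column then has unit length, and any two distinct columns have inner product $-1/(K-1)$, the most negative value that $K$ equiangular unit vectors can achieve. A quick NumPy sanity check of these two properties:

```python
import numpy as np

K, p = 4, 10
rng = np.random.default_rng(0)

# Partial orthogonal P (P^T P = I_K) via reduced QR of a random p x K matrix.
P, _ = np.linalg.qr(rng.standard_normal((p, K)))

# Columns of M are the vertices of a K-simplex ETF.
M = np.sqrt(K / (K - 1)) * P @ (np.eye(K) - np.ones((K, K)) / K)

# Gram matrix: 1 on the diagonal, -1/(K-1) off the diagonal.
print(np.round(M.T @ M, 4))
```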

The paper shows that this phenomenon emerges in a surrogate model, the Layer-Peeled Model below, rather than analyzing the full multilayer neural network directly.

Neural collapse occurs in the Layer-Peeled Model

Imbalanced Data

In the Layer-Peeled Model, the last-layer classifiers corresponding to the minority classes collapse to a single vector when the imbalance ratio $R$ is sufficiently large.

In slightly more detail,

  • when $R$ is below a threshold, the minority classes remain distinguishable in terms of their last-layer classifiers
  • when $R$ is above the threshold, they become indistinguishable

The Minority Collapse phenomenon reveals a fundamental difficulty of using deep learning for classification when the dataset is heavily imbalanced, already at the level of optimization, let alone generalization.

Layer-Peeled Model

(screenshot from the paper: the Layer-Peeled Model program)
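As a reference, the Layer-Peeled Model "peels off" all layers below the last one and treats each example's last-layer feature as a free optimization variable. In the notation I recall from the paper ($K$ classes, $n_k$ examples in class $k$, $N = \sum_k n_k$, and constants $E_W, E_H > 0$), the program is

$$\min_{\mathbf{W},\,\mathbf{H}}\ \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\mathcal{L}\big(\mathbf{W}\mathbf{h}_{k,i},\,\mathbf{y}_k\big)\quad\text{s.t.}\quad \frac{1}{K}\sum_{k=1}^{K}\|\mathbf{w}_k\|_2^2 \le E_W,\quad \frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_k}\sum_{i=1}^{n_k}\|\mathbf{h}_{k,i}\|_2^2 \le E_H,$$

where $\mathbf{w}_k$ is the last-layer classifier of class $k$, $\mathbf{h}_{k,i}$ is the peeled feature of the $i$-th example in class $k$, and $\mathbf{y}_k$ is the corresponding one-hot label.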

For Explaining Neural Collapse

Cross-Entropy Loss

(screenshot from the paper: neural collapse under the cross-entropy loss)
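Here $\mathcal{L}$ is the standard cross-entropy loss applied to the logits $\mathbf{z} = \mathbf{W}\mathbf{h}_{k,i}$,

$$\mathcal{L}(\mathbf{z}, \mathbf{y}_k) = -\log\frac{\exp(z_k)}{\sum_{k'=1}^{K}\exp(z_{k'})}.$$

For balanced data ($n_1 = \cdots = n_K$), the paper's result, as I recall it, is that every global minimizer of the Layer-Peeled Model with this loss exhibits neural collapse: each feature $\mathbf{h}^\star_{k,i}$ aligns with its classifier $\mathbf{w}^\star_k$ up to a common scaling, and the classifiers form a $K$-simplex ETF.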

Extensions to Other Loss Functions

Contrastive Loss

(screenshot from the paper: the contrastive-loss formulation)
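I do not reproduce the paper's exact formula here; for orientation, a supervised contrastive loss on last-layer features (in the style of Khosla et al., 2020) treats samples of the same class as positives and everything else as negatives:

$$\mathcal{L}\big(\mathbf{h}_{k,i};\mathbf{H}\big) = -\frac{1}{n_k - 1}\sum_{\substack{j=1\\ j\neq i}}^{n_k}\log\frac{\exp\!\big(\mathbf{h}_{k,i}^\top\mathbf{h}_{k,j}/\tau\big)}{\sum_{(k',j')\neq(k,i)}\exp\!\big(\mathbf{h}_{k,i}^\top\mathbf{h}_{k',j'}/\tau\big)},$$

with a temperature $\tau > 0$. Note that such a loss depends on the features $\mathbf{H}$ only, not on the classifiers $\mathbf{W}$.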

Softmax-based Loss

(screenshot from the paper: the softmax-based loss family)
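Again without reproducing the paper's exact definition: a softmax-based loss, as I understand the term, depends on the logits only through their softmax probabilities, e.g.

$$\mathcal{L}(\mathbf{z}, \mathbf{y}_k) = f\!\left(\frac{\exp(z_k)}{\sum_{k'=1}^{K}\exp(z_{k'})}\right)$$

for some decreasing function $f$; the cross-entropy loss is the special case $f(t) = -\log t$.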

For Predicting Minority Collapse
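The imbalanced setup, as I remember it from the paper: $K_A$ majority classes, each with $n_A$ training examples, and $K_B = K - K_A$ minority classes, each with $n_B \le n_A$ examples, so the imbalance ratio is $R = n_A / n_B$. Minority Collapse then states that, as $R \to \infty$, the minority classifiers of any global minimizer of the Layer-Peeled Model merge into a single direction:

$$\lim_{R\to\infty}\big(\mathbf{w}^\star_{k} - \mathbf{w}^\star_{k'}\big) = \mathbf{0}\qquad\text{for all } K_A < k < k' \le K.$$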

