How to Train Your MAML to Excel in Few-Shot Classification


by Han-Jia Ye and 1 other authors

Abstract: Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms nowadays. Nevertheless, its performance on few-shot classification is far behind many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot classification. First, we find that MAML needs a large number of gradient steps in its inner loop update, which contradicts its common usage in few-shot classification.
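To make the first point concrete, below is a minimal sketch of MAML's inner-loop adaptation at meta-test time, written in PyTorch. The step count `num_steps`, the learning rate `inner_lr`, and the function name are illustrative assumptions, not the paper's reported settings; the point is only that `num_steps` is set large, where common MAML usage takes just a handful of steps.

```python
import torch
import torch.nn.functional as F


def inner_loop_adapt(model, support_x, support_y, num_steps=15, inner_lr=0.05):
    """Adapt a copy of the meta-trained initialization on the support set.

    The abstract's first finding: few-shot MAML benefits from a large
    num_steps, whereas common practice runs only a few inner-loop steps.
    """
    # Clone parameters so the meta-trained initialization stays untouched.
    params = {name: p.clone() for name, p in model.named_parameters()}
    for _ in range(num_steps):
        logits = torch.func.functional_call(model, params, (support_x,))
        loss = F.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(loss, list(params.values()))
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}
    return params
```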


Second, we find that MAML is sensitive to the class label assignments during meta-testing. Concretely, MAML meta-trains the initialization of an $N$-way classifier. These $N$ ways, during meta-testing, then have $N!$ different permutations to be paired with a few-shot task of $N$ novel classes. We find that these permutations lead to a huge variance of accuracy, making MAML unstable in few-shot classification. Third, we investigate several approaches to make MAML permutation-invariant, among which meta-training a single vector to initialize all the $N$ weight vectors in the classification head performs the best.
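The permutation sensitivity can be illustrated by enumerating all $N!$ ways of pairing the meta-trained classifier ways with the task's novel classes and measuring the spread of accuracy. In this sketch, `evaluate_with_assignment` is a hypothetical helper, standing in for relabeling the support and query sets under one permutation, running the inner loop, and returning query accuracy.

```python
from itertools import permutations
import statistics


def permutation_accuracy_spread(evaluate_with_assignment, num_ways=5):
    # Try every one of the N! ways to pair the N meta-trained classifier
    # "ways" with the N novel classes of the task.
    accs = [evaluate_with_assignment(perm)
            for perm in permutations(range(num_ways))]
    # A large standard deviation is the instability the abstract reports.
    return statistics.mean(accs), statistics.stdev(accs)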

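A minimal sketch of the best-performing remedy, assuming a PyTorch linear head: one meta-trained vector (and bias) initializes all $N$ weight vectors of the classification head. The names `w`, `b`, and `build_head` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn


def build_head(w: torch.Tensor, b: torch.Tensor, num_ways: int) -> nn.Linear:
    """Initialize every row of an N-way linear head from one shared vector."""
    head = nn.Linear(w.numel(), num_ways)
    with torch.no_grad():
        head.weight.copy_(w.expand(num_ways, -1))  # all N rows start as w
        head.bias.copy_(b.expand(num_ways))        # all N biases start as b
    return head
```

Because all rows are identical at initialization, relabeling the novel classes merely permutes identical rows, so every class-label assignment yields the same starting classifier, which is exactly the permutation invariance the abstract describes.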