Comparative Analysis on Classical Meta-Metric Models for Few-Shot Learning


Bibliographic Details
Main Authors: Sai Yang, Fan Liu, Ning Dong, Jiaying Wu
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9139379/
Description
Summary: Few-shot learning encompasses methods and scenarios in which models learn from a small amount of labeled data. While recent meta-metric learning methods have made significant progress, questions remain about what the key components of these methods are and how they work. To address these questions, in this paper we evaluate the effects of the different parts of classical models. Specifically, we 1) use four typical networks, AlexNet, VGG16, GoogLeNet, and ResNet50, to replace the original feature extraction part of the Matching Network, Prototypical Network, and Relation Network, and compare the best results with 17 state-of-the-art meta-metric learning algorithms; 2) fix the feature extraction part of the Matching Network, Prototypical Network, and Relation Network, and change the similarity measurement part of each to L1, L2, or cosine; and 3) evaluate the above three models on datasets of different granularity. The experimental results show that, for all models evaluated, adding non-pretrained networks makes the classification results worse, which indicates that deep networks easily overfit in few-shot learning. Changes in the similarity measurement method have a significant impact on results, which shows the importance of choosing a suitable measure. Moreover, performance differs across datasets of different granularity.
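The role of the similarity measurement part described in the abstract can be illustrated with a minimal sketch of nearest-prototype classification (as in a Prototypical Network) where the metric is a swappable parameter. This is not the authors' implementation; the function names, the 2-D toy embeddings, and the 3-way episode below are illustrative assumptions.

```python
import numpy as np

def l1(a, b):
    # L1 (Manhattan) distance between a query and each prototype
    return np.abs(a - b).sum(axis=-1)

def l2(a, b):
    # L2 (Euclidean) distance
    return np.sqrt(((a - b) ** 2).sum(axis=-1))

def cosine(a, b):
    # cosine distance = 1 - cosine similarity (assumes nonzero vectors)
    num = (a * b).sum(axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return 1.0 - num / den

def classify(query, prototypes, metric):
    # assign the query embedding to the class of the nearest prototype
    return int(np.argmin(metric(query[None, :], prototypes)))

# Toy 3-way episode: one prototype per class, 2-D embeddings
prototypes = np.array([[1.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
query = np.array([4.5, 4.0])

print(classify(query, prototypes, l2))  # -> 1 (nearest prototype under L2)
```

Swapping `l2` for `l1` or `cosine` changes only the `metric` argument, which mirrors the paper's setup of holding the feature extractor fixed while varying the similarity measure; on real embeddings the three metrics can rank prototypes differently.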
ISSN:2169-3536