CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation

Recent research has shown that visual–text pretrained models perform well in traditional vision tasks. CLIP, the most influential of these works, has garnered significant attention from researchers. Thanks to its excellent visual representation capabilities, many recent studies have used CLIP for pixel-level...


Bibliographic Details
Journal: Entropy
Main Authors: Shi-Cheng Guo, Shang-Kun Liu, Jing-Yu Wang, Wei-Min Zheng, Cheng-Yu Jiang
Format: Article
Language: English
Published: MDPI AG, 2023-09-01
Online Access: https://www.mdpi.com/1099-4300/25/9/1353