CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation

Recent research has shown that visual–text pretrained models perform well on traditional vision tasks. CLIP, as the most influential such work, has garnered significant attention from researchers. Thanks to its excellent visual representation capabilities, many recent studies have used CLIP for pixel-lev...


Bibliographic Details
Published in: Entropy
Main Authors: Shi-Cheng Guo, Shang-Kun Liu, Jing-Yu Wang, Wei-Min Zheng, Cheng-Yu Jiang
Format: Article
Language: English
Published: MDPI AG 2023-09-01
Online Access: https://www.mdpi.com/1099-4300/25/9/1353