FALIP: Visual Prompt as Foveal Attention Boosts CLIP Zero-Shot Performance

Jiedong Zhuang, Jiaqi Hu, Lianrui Mu, Rui Hu, Xiaoyu Liang, Jiangnan Ye, Haoji Hu
Zhejiang University
Corresponding authors.

FALIP enhances region awareness of a pretrained CLIP model without fine-tuning.

Abstract

CLIP has achieved impressive zero-shot performance after pretraining on a large-scale dataset of paired image-text data. Previous works have utilized CLIP by incorporating manually designed visual prompts, such as colored circles and blur masks, into images to guide the model's attention, showing enhanced zero-shot performance on downstream tasks. Although these methods achieve promising results, they inevitably alter the images' original information, which can lead to failure in specific tasks.

We propose a training-free method, Foveal-Attention CLIP (FALIP), which adjusts CLIP's attention by inserting foveal attention masks into the multi-head self-attention module. We demonstrate that FALIP effectively boosts CLIP's zero-shot performance on tasks such as referring expression comprehension, image classification, and 3D point cloud recognition. Experimental results further show that FALIP outperforms existing methods on most metrics and can augment current methods to enhance their performance.
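The sketch below illustrates the core mechanism: an additive foveal bias on the attention logits of a ViT self-attention layer, so that attention is steered toward a region of interest without editing any pixels. This is a minimal PyTorch sketch, not the authors' released implementation; the function name foveal_self_attention and the foveal_bias argument are our own illustrative names.

import torch

def foveal_self_attention(x, qkv_proj, out_proj, num_heads, foveal_bias):
    """Multi-head self-attention with an additive foveal bias.

    x:           (B, N, D) token embeddings ([CLS] + image patches)
    qkv_proj:    nn.Linear(D, 3 * D), the layer's fused QKV projection
    out_proj:    nn.Linear(D, D), the layer's output projection
    foveal_bias: (N, N) additive mask; 0 for keys inside the region of
                 interest, negative elsewhere, softly steering attention
                 toward the region without altering the image itself.
    """
    B, N, D = x.shape
    head_dim = D // num_heads
    qkv = qkv_proj(x).reshape(B, N, 3, num_heads, head_dim).permute(2, 0, 3, 1, 4)
    q, k, v = qkv[0], qkv[1], qkv[2]                    # each (B, heads, N, head_dim)
    attn = (q @ k.transpose(-2, -1)) / head_dim ** 0.5  # (B, heads, N, N) logits
    attn = attn + foveal_bias                           # insert the foveal mask
    attn = attn.softmax(dim=-1)
    return out_proj((attn @ v).transpose(1, 2).reshape(B, N, D))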

Interaction with FALIP

We can enhance CLIP's region awareness with a variety of visual prompts, even ones that never appeared in the training data.
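As an illustration of how a region prompt such as a bounding box could be converted into a foveal attention mask for the sketch above, here is a rough sketch assuming a 14x14 ViT patch grid and a hard inside/outside region (the paper's foveal masks may instead use a smooth falloff; box_to_foveal_bias and penalty are hypothetical names):

import torch

def box_to_foveal_bias(box, grid=14, penalty=-4.0):
    """Map a normalized box (x0, y0, x1, y1) in [0, 1] to an additive
    attention bias over 1 + grid * grid tokens ([CLS] + patches).
    Keys inside the box (and the [CLS] token) get bias 0; all other
    keys get a negative penalty that softly suppresses them."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, grid),
                            torch.linspace(0, 1, grid), indexing="ij")
    inside = (xs >= box[0]) & (xs <= box[2]) & (ys >= box[1]) & (ys <= box[3])
    keep = torch.cat([torch.tensor([True]), inside.flatten()])  # prepend [CLS]
    row = torch.full((keep.numel(),), penalty)
    row[keep] = 0.0
    return row.expand(keep.numel(), keep.numel())  # same key bias for every query

The resulting (N, N) tensor can be passed as foveal_bias to the attention sketch above; any region prompt (box, circle, or segmentation mask) reduces to the same mask format, which is why no special prompt-format design is needed.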

Comparison with Existing Methods

Overview of existing methods and FALIP. Left: the pipeline of visual prompt methods, which edit the image (e.g., overlaying colored boxes, cropping, drawing circles, or pasting blur masks) so that CLIP perceives specific regions. Bottom right: FALIP, which unifies the previous methods; it requires no hand-designed prompt format and does not alter the content of the original image.

Zero-shot Referring Expression Comprehension. FALIP surpasses existing methods.

Methods     RefCOCO              RefCOCO+             RefCOCOg
            TestA  TestB  Val    TestA  TestB  Val    Test   Val
CLIP        13.5   19.2   15.7   13.6   19.6   16.3   19.1   18.1
CPT         36.1   30.3   32.2   35.2   28.8   31.9   36.5   36.7
RedCircle   38.8   30.5   34.9   41.7   31.9   37.7   39.7   39.7
FALIP       41.4   33.2   37.5   44.4   37.6   40.3   45.4   45.6

Zero-shot Classification. Unlike visual prompt methods, FALIP achieves higher classification accuracy than vanilla CLIP.

Methods     StanfordDogs   CUB-200-2011   ImageNet-S     Waterbirds
            Top1   Top5    Top1   Top5    Top1   Top5    Top1
CLIP        56.5   85.2    54.2   83.7    64.9   88.4    78.2
RedCircle   52.4   82.8    44.2   77.0    62.8   86.5    77.5
Blur        51.9   81.9    39.1   79.0    53.8   77.6    78.1
FALIP       58.3   86.0    54.3   83.6    67.3   89.9    79.7

Zero-shot 3D Point Cloud Recognition. FALIP achieves higher recognition accuracy than CLIP.

Methods   ModelNet40   ScanObjectNN
CLIP      16.5         14.6
FALIP     18.6         15.3

Referring Expression Comprehension

Given the referring expression on the left, FALIP predicts the corresponding object in the image on the right. Keywords are highlighted in orange.

Attention Visualization

FALIP focuses on the target objects rather than irrelevant objects in the background.