
# LAFB

Code repository for our paper "Learning Adaptive Fusion Bank for Multi-modal Salient Object Detection," accepted by IEEE TCSVT 2024.

arXiv version: <https://arxiv.org/abs/2406.01127>

**2024-07-19:** The prediction results and model weights for both the VGG and ResNet backbones have been uploaded to the Baidu Netdisk links below.

## Citing our work

If you find our work helpful, please cite:

```bibtex
@article{wang2024learning,
  title={Learning Adaptive Fusion Bank for Multi-modal Salient Object Detection},
  author={Wang, Kunpeng and Tu, Zhengzheng and Li, Chenglong and Zhang, Cheng and Luo, Bin},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
```

## Overview

### Framework

*(figure: overall framework of LAFB)*

### RGB-D SOD Performance

*(figure: quantitative comparison on RGB-D SOD benchmarks)*

### RGB-T SOD Performance

*(figure: quantitative comparison on RGB-T SOD benchmarks)*

## Data Preparation

The RGB-D and RGB-T SOD datasets can be found here (Baidu Pan, fetch code: chjo).

## Predictions

Saliency maps can be found here (Baidu Pan, fetch code: uodf) or on Google Drive.

## Pretrained Models

Pretrained parameters can be found here (Baidu Pan, fetch code: 3ed6) or on Google Drive.
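
Once downloaded, the weights can be restored with standard PyTorch checkpoint loading. The sketch below is illustrative only: the module path `Code.model`, the class name `LAFB`, and the checkpoint filename are assumptions, not the repository's actual identifiers.

```python
# Minimal sketch of restoring a released checkpoint.
# NOTE: `Code.model`, `LAFB`, and the .pth filename are hypothetical;
# substitute the actual class and file shipped with this repository.
import torch

from Code.model import LAFB  # hypothetical import path

net = LAFB()
state = torch.load('LAFB_ResNet.pth', map_location='cpu')  # assumed filename
net.load_state_dict(state)
net.eval()  # inference mode for producing saliency maps
```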

## Usage

### Prepare

1. Create directories for the experiment and parameter files.
2. Use `conda` to install `torch` (1.12.0) and `torchvision` (0.13.0).
3. Install the remaining packages: `pip install -r requirements.txt`.
4. Set the paths of all datasets in `./Code/utils/options.py` (see the sketch after this list).
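
For orientation, an options file in repositories of this kind is typically a thin argparse wrapper. The argument names and defaults below (`--rgb_root`, `--depth_root`, etc.) are assumptions for illustration; use whatever names `./Code/utils/options.py` actually defines.

```python
# Hypothetical sketch of an argparse-based options file; the argument
# names and defaults are assumed, not taken from this repository.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--epoch', type=int, default=100, help='number of training epochs')
parser.add_argument('--lr', type=float, default=1e-4, help='initial learning rate')
parser.add_argument('--batchsize', type=int, default=8, help='training batch size')
# Dataset roots: point these at your local copies of the SOD datasets.
parser.add_argument('--rgb_root', type=str, default='/path/to/train/RGB/')
parser.add_argument('--depth_root', type=str, default='/path/to/train/depth/')  # RGB-D
parser.add_argument('--thermal_root', type=str, default='/path/to/train/T/')    # RGB-T
parser.add_argument('--gt_root', type=str, default='/path/to/train/GT/')
opt = parser.parse_args()
```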

### Train

```bash
python train.py
```

### Test

```bash
python test_produce_maps.py
```
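
The test script produces saliency maps on disk. If you want a quick sanity check on those maps, mean absolute error (MAE), one of the standard SOD metrics, can be computed with a few lines of NumPy. The snippet below is not part of this repository, and the directory paths are placeholders.

```python
# Standalone MAE computation over predicted saliency maps vs. ground truth.
# Not part of this repository; paths are placeholders.
import os

import numpy as np
from PIL import Image

pred_dir = '/path/to/predictions/'   # maps written by test_produce_maps.py
gt_dir = '/path/to/GT/'              # binary ground-truth masks

maes = []
for name in sorted(os.listdir(gt_dir)):
    gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert('L'),
                    dtype=np.float64) / 255.0
    pred = Image.open(os.path.join(pred_dir, name)).convert('L').resize(
        (gt.shape[1], gt.shape[0]))  # PIL resize takes (width, height)
    pred = np.asarray(pred, dtype=np.float64) / 255.0
    maes.append(np.abs(pred - gt).mean())

print(f'MAE over {len(maes)} images: {np.mean(maes):.4f}')
```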

## Contact

If you have any questions, please contact us ([email protected]).