This branch includes the official Detectron2 (PyTorch) implementation and pre-trained models for our paper:
Abstract: Cross-domain weakly supervised object detection (CDWSOD) aims to adapt the detection model to a novel target domain with easily acquired image-level annotations. How to align the source and target domains is critical to the CDWSOD accuracy. Existing methods usually focus on partial detection components for domain alignment. In contrast, this paper considers that all the detection components are important and proposes a Holistic and Hierarchical Feature Alignment (H2FA) R-CNN. H2FA R-CNN enforces two image-level alignments for the backbone features, as well as two instance-level alignments for the RPN and detection head. This coarse-to-fine aligning hierarchy is in pace with the detection pipeline, i.e., processing the image-level feature and the instance-level features from bottom to top. Importantly, we devise a novel hybrid supervision method for learning two instance-level alignments. It enables the RPN and detection head to simultaneously receive weak/full supervision from the target/source domains. Combining all these feature alignments, H2FA R-CNN effectively mitigates the gap between the source and target domains. Experimental results show that H2FA R-CNN significantly improves cross-domain object detection accuracy and sets new state of the art on popular benchmarks.
- Linux with CUDA ≥ 9.2, gcc & g++ ≥ 4.9, Python ≥ 3.6
- PyTorch ≥ 1.4 and torchvision that matches the PyTorch installation. Note: make sure your PyTorch version matches the one required by Detectron2
- Detectron2 v0.2, following Detectron2 installation instructions.
git clone https://github.com/XuYunqiu/H2FA_R-CNN.git
cd H2FA_R-CNN
python -m pip install -e .
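After installation, a quick environment sanity check can look like the sketch below (not part of this repository; the printed versions only need to satisfy the minimum requirements listed above).

```python
# Minimal sanity check of the environment (hypothetical helper, not shipped with this repo).
import torch
import torchvision
import detectron2

print("PyTorch:", torch.__version__)          # needs >= 1.4
print("torchvision:", torchvision.__version__)
print("Detectron2:", detectron2.__version__)  # this branch targets v0.2
print("CUDA available:", torch.cuda.is_available())
```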
- Download PASCAL VOC 2007 and 2012 datasets from PASCAL VOC
- Download Clipart, Watercolor and Comic datasets from cross domain detection
- Download Cityscapes and Foggy Cityscapes datasets from da-faster-rcnn-PyTorch
H2FA R-CNN has builtin support for a few datasets. They are assumed to exist in a directory called "datasets/" under the directory where you launch the program, with the following directory structure:
DETECTRON2_DATASETS/
├── results
└── {VOC2007,VOC2012,Clipart,Watercolor,Comic,fogycityscapes}/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/
You can set the location for builtin datasets by export DETECTRON2_DATASETS=/path/to/datasets. If left unset, the default is ./datasets relative to your current working directory.
Note: the size of some target-domain images is inconsistent with the size recorded in their annotations. You may need to resize these images to match the annotations; the affected images are listed in image_list.
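The sketch below is one way to perform this resize (not part of this repository); it assumes Pillow is installed, VOC-style XML annotations, and a hypothetical image_list.txt with one image ID per line.

```python
# Hedged sketch: resize target-domain images to the width/height recorded in
# their VOC-style annotations. Paths and the image_list.txt format are assumptions.
import os
import xml.etree.ElementTree as ET
from PIL import Image

root = os.environ.get("DETECTRON2_DATASETS", "./datasets")
dataset = "Clipart"  # e.g. Clipart, Watercolor, Comic

with open("image_list.txt") as f:  # assumed format: one image ID per line
    image_ids = [line.strip() for line in f if line.strip()]

for image_id in image_ids:
    ann = ET.parse(os.path.join(root, dataset, "Annotations", image_id + ".xml"))
    size = ann.getroot().find("size")
    w, h = int(size.find("width").text), int(size.find("height").text)

    img_path = os.path.join(root, dataset, "JPEGImages", image_id + ".jpg")
    img = Image.open(img_path)
    if img.size != (w, h):
        img.resize((w, h), Image.BILINEAR).save(img_path)
```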
To train a model, run
cd tools/
python train_net.py --num-gpus 2 \
--config-file ../configs/CrossDomain-Detection/h2fa_rcnn_R_101_DC5_clipartall.yaml
The configs are made for 2-GPU training. To train on 1 GPU, you may need to change some parameters following the linear scaling rule, e.g.:
python train_net.py \
--config-file ../configs/CrossDomain-Detection/h2fa_rcnn_R_101_DC5_clipartall.yaml \
--num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 SOLVER.STEPS 48000,64000 SOLVER.MAX_ITER 72000
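These overrides follow the linear scaling rule: halving the total batch size halves the base learning rate and doubles the schedule. The helper below is a hedged sketch of that arithmetic (it is not part of train_net.py; the 2-GPU reference values are inferred from the 1-GPU example above, not read from the config file).

```python
# Hedged sketch of the linear scaling rule used to derive the 1-GPU overrides.
def scale_schedule(ims_per_batch, base_lr, steps, max_iter, ref_gpus=2, gpus=1):
    factor = gpus / ref_gpus
    return {
        "SOLVER.IMS_PER_BATCH": int(ims_per_batch * factor),
        "SOLVER.BASE_LR": base_lr * factor,
        "SOLVER.STEPS": tuple(int(s / factor) for s in steps),
        "SOLVER.MAX_ITER": int(max_iter / factor),
    }

# 2-GPU reference schedule -> suggested 1-GPU schedule
print(scale_schedule(4, 0.005, (24000, 32000), 36000, ref_gpus=2, gpus=1))
# -> {'SOLVER.IMS_PER_BATCH': 2, 'SOLVER.BASE_LR': 0.0025,
#     'SOLVER.STEPS': (48000, 64000), 'SOLVER.MAX_ITER': 72000}
```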
To evaluate the trained models, use
python train_net.py \
--config-file ../configs/CrossDomain-Detection/h2fa_rcnn_R_101_DC5_clipartall.yaml \
--eval-only MODEL.WEIGHTS /path/to/checkpoint_file
Note: the command lines above are a simple example for VOC -> Clipartall adaptation. Change the config file to train or evaluate on other datasets.
Pre-trained models with the R101-DC5 backbone are available for seven adaptation benchmarks. All models were trained on 2 NVIDIA V100 (32 GB) GPUs.
| Name | iterations | mAP (%) | pre-trained model | metrics |
|---|---|---|---|---|
| VOC -> Clipartall | 36k | 69.8 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| VOC -> Cliparttest | 24k | 55.3 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| VOC -> Watercolor | 24k | 59.9 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| VOC -> Comic | 24k | 46.4 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| VOC -> Watercolorextra | 36k | 62.6 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| VOC -> Comicextra | 36k | 53.0 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
| Cityscapes -> Foggy Cityscapes | 24k | 47.4 | Google Drive / Baidu Pan | Google Drive / Baidu Pan |
- Detectron2 implementation (this branch)
- PaddleDetection implementation (ppdet branch)
- Code refactoring to make Detectron2 a third party
If you find this project useful for your research, please use the following BibTeX entry.
@inproceedings{xu2022h2fa,
title={{H$^2$FA R-CNN}: Holistic and Hierarchical Feature Alignment for Cross-domain Weakly Supervised Object Detection},
author={Xu, Yunqiu and Sun, Yifan and Yang, Zongxin and Miao, Jiaxu and Yang, Yi},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022},
pages={14329-14339},
}
This project is released under the Apache 2.0 license.
This project is built on Detectron2 and PaddleDetection. Thanks for their contributions.
If you have any questions, please drop me an email: [email protected]