# You Can Only Watch the Past: Track Attention Network for Online Spatio-Temporal Action Detection
- We recommend using Anaconda to create a conda environment:
conda create -n tan python=3.6
- Then, activate the environment:
conda activate tan
- Install the requirements:
pip install -r requirements.txt
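After installing, a quick check like the minimal sketch below (not part of the repo) confirms that PyTorch was installed with CUDA support before launching training:

```python
# Minimal environment check: verify the PyTorch install and CUDA visibility.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU count:", torch.cuda.device_count())
```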
You can download the UCF24 dataset (UCF101-24) from the following links:
- Google drive
Link: https://drive.google.com/file/d/1Dwh90pRi7uGkH5qLRjQIFiEmMJrAog5J/view?usp=sharing
- BaiduYun Disk
Link: https://pan.baidu.com/s/11GZvbV0oAzBhNDVKXsVGKg
Password: hmu6
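Once downloaded and extracted, a quick sanity check like the sketch below can catch a broken extraction early. The subdirectory names (`rgb-images`, `labels`) follow the common YOWO-style UCF24 layout and are an assumption about this release, not a guarantee:

```python
# Sanity-check the extracted dataset. NOTE: the subdirectory names are
# an assumption (the usual YOWO-style UCF24 layout), not a guarantee.
import os

data_root = "/path/to/UCF24-YOWO"  # set to the --data_root you will pass to train.py
for sub in ("rgb-images", "labels"):
    path = os.path.join(data_root, sub)
    print(f"{sub}: {'found' if os.path.isdir(path) else 'MISSING'}")
```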
- Train on UCF101-24. For example:
python -m torch.distributed.launch --nnodes 1 --nproc_per_node 2 train.py --cuda -d ucf24 --data_root /data1/su/datasets/UCF24-YOWO/ -bs 32 -tbs 16 -K 16 -accu 8 -v yowo_v3_large --max_epoch 7 --lr_epoch 2 3 4 5 --eval -ct 0.05 --distributed --sybn (--resume weights/ucf24/yowo_v3_large/yowo_v3_large_epoch_1.pth) (--untrimmed_trainning) (--track_mode)
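For reference, below is a minimal sketch of what the flags above typically mean in standard PyTorch DDP code. This is an assumption about conventional usage, not the repository's actual `train.py`: `--sybn` usually maps to a SyncBatchNorm conversion, and `-accu 8` to stepping the optimizer once every 8 micro-batches.

```python
# Sketch of standard DDP training with SyncBatchNorm (--sybn) and
# gradient accumulation (-accu 8). The dummy model and random data are
# placeholders; run under torch.distributed.launch / torchrun.
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")            # env vars set by the launcher
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # torchrun sets LOCAL_RANK
torch.cuda.set_device(local_rank)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()).cuda()
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)  # what --sybn usually does
model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

accu = 8  # -accu 8: accumulate gradients over 8 micro-batches
for i in range(accu * 2):                          # two optimizer steps, for illustration
    x = torch.randn(4, 3, 32, 32, device="cuda")
    loss = model(x).mean() / accu                  # scale so accumulated grads average
    loss.backward()
    if (i + 1) % accu == 0:
        optimizer.step()
        optimizer.zero_grad()
```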
- Frame-mAP:
python eval.py --cuda -d ucf24 --data_root /data1/su/datasets/UCF24-YOWO/ -tbs 16 -v tan_large --weight weights/ucf24/tan_large/tan_large_epoch_0.pth -ct 0.05 --cal_frame_mAP
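Conceptually, frame-mAP evaluates detections independently on every frame: per class, detections are ranked by score, matched one-to-one to ground-truth boxes at IoU >= 0.5, and the area under the precision-recall curve is averaged over classes. A self-contained sketch of that standard protocol (not the repository's `eval.py`):

```python
# Sketch of the standard frame-AP protocol for one class; frame-mAP is
# the mean of this over all classes.
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def frame_ap(dets, gts, thr=0.5):
    """dets: list of (frame_id, score, box); gts: {frame_id: [box, ...]}."""
    dets = sorted(dets, key=lambda d: -d[1])               # highest score first
    used = {f: [False] * len(b) for f, b in gts.items()}   # one match per GT box
    tp = np.zeros(len(dets))
    for i, (f, _, box) in enumerate(dets):
        cands = gts.get(f, [])
        best = max(range(len(cands)), key=lambda j: iou(box, cands[j]), default=-1)
        if best >= 0 and iou(box, cands[best]) >= thr and not used[f][best]:
            tp[i] = 1
            used[f][best] = True
    n_gt = sum(len(b) for b in gts.values())
    rec = np.cumsum(tp) / max(n_gt, 1)
    prec = np.cumsum(tp) / np.arange(1, len(dets) + 1)
    return float(np.trapz(prec, rec))                      # area under PR curve
```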
- Video-mAP:
python eval.py --cuda -d ucf24 --data_root /data1/su/datasets/UCF24-YOWO/ -tbs 16 -v tan_large --weight weights/ucf24/tan_large/tan_large_epoch_0.pth -ct 0.05 --cal_video_mAP --link_method viterbi
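Video-mAP is computed over action tubes rather than individual frames, so per-frame detections are first linked across time. Below is a sketch of the common Viterbi-style linking idea (the actual `--link_method viterbi` implementation may differ): pick one box per frame so that the summed detection scores plus the overlap between consecutive boxes is maximized by dynamic programming; `lam` is a hypothetical trade-off weight.

```python
# Sketch of Viterbi-style tube linking for one class: choose one box per
# frame maximizing sum of scores + lam * IoU between consecutive boxes.
# An illustration of the idea, not the repository's implementation.
import numpy as np

def pairwise_iou(a, b):
    """IoU matrix between box arrays a [N,4] and b [M,4] (x1, y1, x2, y2)."""
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda r: (r[:, 2] - r[:, 0]) * (r[:, 3] - r[:, 1])
    return inter / (area(a)[:, None] + area(b)[None, :] - inter + 1e-9)

def link_tube(frames, lam=1.0):
    """frames: list over time of (boxes [N,4], scores [N]) arrays.
    Returns the index of the chosen box in each frame."""
    dp = frames[0][1].astype(float)   # best path score ending at each box
    back = []                         # backpointers, one array per transition
    for t in range(1, len(frames)):
        ov = pairwise_iou(frames[t - 1][0], frames[t][0])  # [N_prev, N_cur]
        trans = dp[:, None] + lam * ov
        back.append(trans.argmax(axis=0))
        dp = trans.max(axis=0) + frames[t][1]
    path = [int(dp.argmax())]         # best final box, then backtrack
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

In the common protocol, repeating this (e.g., removing linked boxes and relinking) yields multiple tubes per class, and scoring tubes against ground-truth tubes by spatio-temporal IoU gives the video-AP.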
Model weights and detection results can be downloaded from the cloud drive link below.
- BaiduYun Disk
Link: https://pan.baidu.com/s/1j4890Y6rtzycWG_jQeyd-Q?pwd=m5xd
Password: m5xd
Currently, only part of the code is released. The complete code will be released after the paper is accepted.