# LF-MDet

Code for **Low-rank Multimodal Remote Sensing Object Detection with Frequency Filtering Experts**

Xu Sun, Yinhui Yu*, and Qing Cheng

## Update

- [2024/7] This code will be released soon.

## ⚙ Network Architecture

## 🌐 Usage

### 1. Virtual Environment

Create the conda environment from the provided specification:

```shell
conda env create -f environment.yml
```

### 2. Ultralytics YOLOv8

The `convert_yolo.py` script converts object detection annotations from the OpenMMLab format to the Ultralytics YOLO format.
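The conversion boils down to turning absolute corner coordinates into class-prefixed, center-based coordinates normalized to the image size. The sketch below illustrates that transform for a single box; the function name and exact I/O handling are assumptions for illustration, and the repo's actual `convert_yolo.py` may differ.

```python
# Hypothetical sketch of the core transform in an OpenMMLab-to-YOLO
# annotation converter: absolute corner box [x1, y1, x2, y2] (pixels)
# -> YOLO label line "class cx cy w h" normalized to [0, 1].
def mmlab_to_yolo(class_id, bbox, img_w, img_h):
    """Return one Ultralytics YOLO label line for a pixel-space box."""
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2.0 / img_w   # normalized box-center x
    cy = (y1 + y2) / 2.0 / img_h   # normalized box-center y
    w = (x2 - x1) / img_w          # normalized width
    h = (y2 - y1) / img_h          # normalized height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a 100x50 box with top-left corner (200, 150) in a 640x512 image.
print(mmlab_to_yolo(0, [200, 150, 300, 200], 640, 512))
# -> 0 0.390625 0.341797 0.156250 0.097656
```

YOLO expects one such line per object in a `.txt` file that shares its stem with the image file.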

The YOLOv8l model, configured with different modalities and various data augmentation methods, can be downloaded via the following links:

### 3. LF-MDet Training

Run

```shell
source ~/.bashrc
conda activate openmmlab
which python
nohup python3 {config_path} \
    --work-dir {checkpoint_path} \
    --gpu-ids 0 > {log_path}.log 2>&1 &
```

The trained model is saved in `./checkpoints/`.

### 4. LF-MDet Testing

Run

```shell
source ~/.bashrc
conda activate openmmlab
which python
python3 {config_path} \
    --work-dir {checkpoint_path} \
    --eval bbox \
    --gpu-ids 0
```

### 5. Visualization

Detection results of our approach, visualized on the VEDAI and DroneVehicle datasets.