Commit 1d3da47: Improve README

Maria Wyrzykowska committed Apr 24, 2024 · 1 parent 132e4bb
Showing 2 changed files with 57 additions and 8 deletions.

README.md (27 additions, 8 deletions)
# Conditional Variational Diffusion Models

This code implements Conditional Variational Diffusion Models as described in [the paper](https://arxiv.org/abs/2312.02246).

## Where to get the data?

The datasets we use are available online:
- [BioSR](https://github.com/qc17-THU/DL-SR) - the data we use has been transformed to .npy files
- [ImageNet from ILSVRC2012](https://www.image-net.org/challenges/LSVRC/2012/)
- [HCOCO](https://github.com/bcmi/Image-Harmonization-Dataset-iHarmony4?tab=readme-ov-file) - used only in model evaluation

It is assumed that for:
- the BioSR super-resolution task, the data can be found in the directory specified as `dataset_path` in `configs/biosr.yaml`, in two files: x.npy (input) and y.npy (ground truth)
- the BioSR phase task, the data can be found in the directory specified as `dataset_path` in `configs/biosr_phase.yaml`, in one file, y.npy (ground truth); the input to the model is generated from the ground truth
- the ImageNet super-resolution task, the data can be found in the directory specified as `dataset_path` in `configs/imagenet_sr.yaml` as a collection of JPEG files; the input to the model is generated from the ground truth
- the ImageNet phase task, the data can be found in the directory specified as `dataset_path` in `configs/imagenet_phase.yaml` as a collection of JPEG files; the input to the model is generated from the ground truth
- the HCOCO phase evaluation task, the data can be found in the directory specified as `dataset_path` in `configs/hcoco_phase_eval.yaml` as a collection of JPEG files; the input to the model is generated from the ground truth
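
As an illustration of the expected on-disk layout, the sketch below loads the two BioSR super-resolution files named above. It is only a sanity check, not part of the repository: the placeholder path is an assumption, and the actual array shapes depend on how the .npy files were prepared.

```python
# Illustrative sanity check of the BioSR super-resolution layout described above.
# "dataset_path" is a placeholder and should match the dataset_path entry in
# configs/biosr.yaml; the array shapes depend on how the .npy files were prepared.
import os

import numpy as np

dataset_path = "/path/to/biosr"  # placeholder
x = np.load(os.path.join(dataset_path, "x.npy"))  # model input
y = np.load(os.path.join(dataset_path, "y.npy"))  # ground truth
print("input:", x.shape, "ground truth:", y.shape)
```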

## How to prepare the environment?

Run the following commands:
```
conda create -n myenv python=3.10
conda activate myenv
pip install -r requirements.txt
pip install -e .
```

## How to run the training code?

1. Download the data.
2. Modify the config in the `configs/` directory with the path to the data you want to use and the directory for outputs.
3. Run the code from the root directory: `python scripts/train.py --config-path $PATH_TO_CONFIG --neptune-token $NEPTUNE_TOKEN`.

The `--neptune-token` argument is optional.

## How to run the evaluation code?

1. Download the data.
2. Modify the config in the `configs/` directory with the path to the data you want to use and the directory for outputs.
3. Run the code from the root directory: `python scripts/eval.py --config-path $PATH_TO_CONFIG --neptune-token $NEPTUNE_TOKEN`.

The `--neptune-token` argument is optional.
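
For example, to evaluate on HCOCO with the config added in this commit (assuming `dataset_path` in `configs/hcoco_phase_eval.yaml` points at your local copy of the data): `python scripts/eval.py --config-path configs/hcoco_phase_eval.yaml`.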

## License
This repository is released under the MIT License (refer to the LICENSE file for details).

configs/hcoco_phase_eval.yaml (new file, 30 additions)
task: "hcoco_phase"

model:
  noise_model_type: "unet"
  alpha: 0.001
  load_weights: null
  snr_expansion_n: 1

training:
  lr: 0.0001
  epochs: 100

eval:
  output_path: "outputs/hcoco"
  generation_timesteps: 1000
  checkpoint_freq: 1000
  log_freq: 10
  image_freq: 100
  val_freq: 200
  val_len: 100

data:
  dataset_path: "/bigdata/casus/MLID/maria/hcoco_sample"
  n_samples: 100
  batch_size: 1
  im_size: 256

neptune:
  name: "Virtual_Stain"
  project: "mlid/test"
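
As a sketch of how this file is structured (not the repository's own config-loading code), it can be read with PyYAML; the key names below come from the file above, and `yaml` refers to the PyYAML package:

```python
# Minimal sketch: read configs/hcoco_phase_eval.yaml with PyYAML and print a few
# of the keys shown above. Illustrative only, not the repository's loader.
import yaml

with open("configs/hcoco_phase_eval.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["task"])                          # "hcoco_phase"
print(cfg["model"]["noise_model_type"])     # "unet"
print(cfg["eval"]["generation_timesteps"])  # 1000
print(cfg["data"]["dataset_path"])          # location of the HCOCO sample data
```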
