pv-vision

Image analysis tool for solar modules, assisted by deep learning.

GitHub license | Requires Python 3.8+ | PyPI | DOI

⚠️ PV-Vision is still being tested in different operating environments. Please raise an issue if you cannot use it on your computer.

⚠️ We are actively updating our tools, so some of the tutorials may be out of date.

Currently the repo is maintained by one person only, so everyone is welcome to open issues and pull requests. Thank you for contributing to making this tool better.

A toy system showing how to serve the crack segmentation model can be found here.

Installation

  1. Create a virtual environment with conda:
conda create -n pv-vision python=3.10
conda activate pv-vision
  2. Install from source (recommended for the current beta version):
git clone https://github.com/hackingmaterials/pv-vision.git
cd pv-vision
pip install .
  3. Install from PyPI (alternative):
pip install pv-vision
  4. To enable CUDA and GPU acceleration, install PyTorch with cudatoolkit by following the instructions on pytorch.org, then verify with the quick check below.
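
To verify that GPU acceleration is actually available, here is a minimal check in plain PyTorch (not part of pv-vision itself):

import torch

# True only if PyTorch was built with CUDA support and a GPU is visible
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))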

Citation

The PV-Vision package covers several topics in solar cell image analysis and is still expanding. We have published several papers related to various topics in this package. Please cite our papers accordingly.

If your work is about automatic defect identification, please cite the following paper:

@article{chen2022automated,
  title={Automated defect identification in electroluminescence images of solar modules},
  author={Chen, Xin and Karin, Todd and Jain, Anubhav},
  journal={Solar Energy},
  volume={242},
  pages={20--29},
  year={2022},
  publisher={Elsevier}
}

If your work is about automatic crack segmentation and feature extraction, please cite the following paper:

# Crack segmentation paper
@article{chen2023automatic,
  title={Automatic Crack Segmentation and Feature Extraction in Electroluminescence Images of Solar Modules},
  author={Chen, Xin and Karin, Todd and Libby, Cara and Deceglie, Michael and Hacke, Peter and Silverman, Timothy J and Jain, Anubhav},
  journal={IEEE Journal of Photovoltaics},
  year={2023},
  publisher={IEEE}
}

We also published our dataset as a benchmark for crack segmentation. If you use our dataset, please cite the following:

# Crack segmentation dataset
@misc{chen2022benchmark,
  title={A Benchmark for Crack Segmentation in Electroluminescence Images},
  doi={10.21948/1871275},
  url={https://datahub.duramat.org/dataset/crack-segmentation},
  author={Chen, Xin and Karin, Todd and Libby, Cara and Deceglie, Michael and Hacke, Peter and Silverman, Timothy and Gabor, Andrew and Jain, Anubhav},
  year={2022},
}

In general, if you want to cite the PV-Vision package or this repository, please use the following BibTeX:

@misc{PV-Vision,
  doi={10.5281/ZENODO.6564508},
  url={https://github.com/hackingmaterials/pv-vision},
  author={Chen, Xin},
  title={pv-vision},
  year={2022},
  copyright={Open Access}
}

Examples of citing our works in LaTeX:

To enable the automatic analysis of EL images, an open-source package PV-VISION~\cite{PV-Vision} was developed.

Individual defects were located and classified using an object detection model in a previous work~\cite{chen2022automated}.

Cracks were segmented using a semantic segmentation model, and crack features such as isolated area or length were automatically extracted in a previous work~\cite{chen2023automatic}. The corresponding dataset was published as a benchmark~\cite{chen2022benchmark}.

Overview

This package allows you to analyze electroluminescence (EL) images of photovoltaic (PV) modules. The methods provided in this package include module transformation, cell segmentation, crack segmentation, defective cell identification, etc. Future work will include photoluminescence image analysis, image denoising, barrel distortion fixing, etc.
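
As an illustration of the module transformation step, here is a minimal perspective-warp sketch in plain OpenCV (not the pv_vision API; the corner coordinates and output size are made up for the example):

import cv2
import numpy as np

# Hypothetical corner coordinates of a module detected in a field image
src = np.float32([[102, 88], [1180, 95], [1195, 660], [90, 650]])
# Target: a flat, axis-aligned module of 1100 x 560 pixels
dst = np.float32([[0, 0], [1100, 0], [1100, 560], [0, 560]])

img = cv2.imread("raw_images/img1.png", cv2.IMREAD_GRAYSCALE)
M = cv2.getPerspectiveTransform(src, dst)
module = cv2.warpPerspective(img, M, (1100, 560))
cv2.imwrite("module_transformed.png", module)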

You can either use the package pv_vision and write your own code following the instructions in the tutorials, or you can directly run our pipeline.sh to do automated defect identification. When pipeline.sh is used, the YOLO model is applied for prediction by default. The output will give you the analysis from the model.

Our trained neural network models can be downloaded here.

Currently the model weights are organized into the following folders:

  1. Folder "crack_segmentation" is used for predicting the pixels that belong to cracks, busbars, etc. using semantic segmentation.

  2. Folder "defect_detection" is used to do object detection of defective cells.

  3. Folder "cell_classification" is used to do cell classification.

  4. Folder "module_segmentation" is used for perspective transformation of solar module images using semantic segmentation. It predicts the contour of field module images.

Analyze data

Tutorials for using PV-Vision can be found in the tutorials folder. The tutorials cover perspective transformation, cell segmentation, model inference, and model output analysis.
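
To give a flavor of the cell segmentation step, the sketch below slices a perspective-corrected module into individual cells, assuming a hypothetical 6 x 10 cell grid (real modules vary):

import cv2

# Assume a perspective-corrected module image with a 6 x 10 cell grid
rows, cols = 6, 10
module = cv2.imread("module_transformed.png", cv2.IMREAD_GRAYSCALE)
h, w = module.shape
cell_h, cell_w = h // rows, w // cols

# Crop each cell out of the module image, row by row
cells = [
    module[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
    for r in range(rows)
    for c in range(cols)
]
print(len(cells))  # 60 cell images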

Public dataset

We published one of our datasets as a benchmark for crack segmentation. Images and annotations can be found on the DuraMat datahub.

Deploy models

There are three ways to deploy our deep learning models:

1. Use Python (Recommended)

Check the tutorials for modelhandler.py. This tool allows you to train your own deep learning models.

from pv_vision.nn import ModelHandler
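
For context, below is a minimal sketch of the kind of inference loop such a handler wraps. This is plain PyTorch with a made-up checkpoint path, not the actual ModelHandler API; see the modelhandler tutorials for the real interface:

import torch
from torchvision import transforms
from PIL import Image

# Hypothetical checkpoint path; this assumes the file stores a full model object
model = torch.load("crack_segmentation/model.pt", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

img = Image.open("cell.png").convert("L")
x = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    mask = model(x).argmax(dim=1)  # per-pixel class prediction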

2. Use Supervisely

Upload the model weights to Supervisely and make predictions on this website. The detailed tutorials can be found here and here.

3. Use Docker

⚠️ This method may be out of date.

You can also run the models using Docker.

First, make sure you have prepared the required files as stated in the folder structure below.

Then pull the images

docker pull supervisely/nn-yolo-v3
docker pull supervisely/nn-unet-v2:6.0.26

You should see the two images by running

docker image ls

Start the containers by running

docker run -d --rm -it --runtime=nvidia -p 7000:5000 -v "$(pwd)/unet_model:/sly_task_data/model" --env GPU_DEVICE=0 supervisely/nn-unet-v2:6.0.26 python /workdir/src/rest_inference.py

docker run -d --rm -it --runtime=nvidia -p 5000:5000 -v "$(pwd)/yolo_model:/sly_task_data/model" --env GPU_DEVICE=0 supervisely/nn-yolo-v3 python /workdir/src/rest_inference.py

Here we deploy the UNet to port 7000 and the YOLO to port 5000. The paths $(pwd)/unet_model and $(pwd)/yolo_model are where we store our model weights. You can download them here.

Check whether the two containers are running successfully with

docker container ls
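
Once both containers are up, inference is served over HTTP. The sketch below queries such an endpoint with Python's requests library; the route /model/inference is a placeholder assumption, since the real path is defined by rest_inference.py inside the Supervisely image:

import base64
import requests

# Placeholder route: check rest_inference.py in the container for the real one
URL = "http://localhost:5000/model/inference"

with open("raw_images/img1.png", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post(URL, json=payload, timeout=60)
print(resp.json())  # predicted annotations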

After you have deployed the models, run our pipeline script to get the predictions. Note that this pipeline was only designed for object detection and is not actively maintained at the moment. Check our tutorials to learn how to do crack analysis.

bash pipeline.sh

You will find the predictions in a new folder output.

In general, your folder structure should look like the following. When starting the containers, you need to prepare unet_model and yolo_model. When running pipeline.sh, you only need to prepare pipeline.sh, raw_images (which stores the raw grayscale EL images), and scripts (where you configure the metadata), all inside the parent folder PV-pipeline. The output folder will be created after you run pipeline.sh.

PV-pipeline
├── unet_model
│   ├── config.json
│   └── model.pt
├── yolo_model
│   ├── config.json
│   ├── model.weights
│   └── model.cfg
├── pipeline.sh
├── raw_images
│   ├── img1.png
│   ├── img2.png
│   ├── img3.png
│   ├── img4.png
│   └── img5.png
├── scripts
│   ├── metadata
│   │   ├── defect_colors.json
│   │   └── defect_name.json
│   ├── collect_cell_issues.py
│   ├── highlight_defects.py
│   ├── move2folders.py
│   └── transform_module_v2.py
└── output
    ├── analysis
    │   ├── cell_issues.csv
    │   ├── classified_images
    │   │   ├── category1
    │   │   │   └── img1.png
    │   │   ├── category2
    │   │   │   ├── img4.png
    │   │   │   └── img2.png
    │   │   └── category3
    │   │       ├── img3.png
    │   │       └── img5.png
    │   └── visualized_images
    │       ├── img1.png
    │       ├── img2.png
    │       ├── img3.png
    │       ├── img4.png
    │       └── img5.png
    ├── transformation
    │   ├── failed_images
    │   └── transformed_images
    │       ├── img1.png
    │       ├── img2.png
    │       ├── img3.png
    │       ├── img4.png
    │       └── img5.png
    ├── unet_ann
    │   ├── img1.png.json
    │   ├── img2.png.json
    │   ├── img3.png.json
    │   ├── img4.png.json
    │   └── img5.png.json
    └── yolo_ann
        ├── img1.png.json
        ├── img2.png.json
        ├── img3.png.json
        ├── img4.png.json
        └── img5.png.json
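
To post-process the predictions, the per-image JSON files can be read back in. Below is a minimal sketch that assumes Supervisely-style annotation keys (objects, classTitle); treat the key names as assumptions and inspect one file first:

import json
from pathlib import Path

# Tally predicted object classes across all YOLO annotation files
counts = {}
for ann_file in Path("output/yolo_ann").glob("*.json"):
    with ann_file.open() as f:
        ann = json.load(f)
    # "objects" / "classTitle" are Supervisely-style keys (an assumption here)
    for obj in ann.get("objects", []):
        label = obj.get("classTitle", "unknown")
        counts[label] = counts.get(label, 0) + 1

print(counts)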

To do

  1. We will upload some EL images for users to practice on after we get approval from our data provider. (Done)
  2. We will improve the user experience of our tools. We will use more object-oriented programming (OOP) in future versions. (Done)
  3. We have also developed algorithms for extracting cracks from solar cells. We will integrate these algorithms with PV-Vision. (Done)
  4. We want to predict the worst-case degradation amount based on the existing crack pattern. This will also be integrated into PV-Vision. (Done)
  5. Add neural network modules. (Done)
  6. Add result analysis. (Done)
