RoadImage

Distinguishing Urban Roads from Non-Road Regions in the Indian Driving Dataset (IDD) Using Binary Deep Learning Segmentation 🛣️

This project develops a binary segmentation model that distinguishes Road from Non-Road regions in urban driving images captured in India. Three pixel-level models are provided: a U-Net built from scratch, a pretrained U-Net, and a pretrained FPN, with the pretrained U-Net fine-tuned on the IDD dataset. The outcomes of this project have potential applications in domains such as autonomous driving, road maintenance, mapping and infrastructure planning, and traffic management.

More specifically, the segmentation pipeline is based on a U-Net architecture imported from the segmentation_models.pytorch library, using an EfficientNet backbone pre-trained on ImageNet. To improve performance and ensure robustness across diverse scenarios, several data augmentation techniques are applied during training, including horizontal and vertical flips as well as brightness adjustments. These allow the model to learn from a more varied dataset and generalize better to unseen images, while accounting for the complexity and variability inherent in real-world urban environments.
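
For illustration, the following minimal sketch shows how such a model and augmentation pipeline could be assembled with segmentation_models.pytorch. The exact EfficientNet variant, the augmentation library (albumentations is assumed here), and all parameter values are assumptions and may differ from the notebook.

```python
# Minimal sketch (not the notebook's exact code): a pretrained U-Net with an
# EfficientNet encoder plus the augmentations described above.
import segmentation_models_pytorch as smp
import albumentations as A  # assumed augmentation library

# U-Net with an ImageNet-pretrained EfficientNet backbone; a single output
# channel is enough for the binary Road / Non-Road mask.
model = smp.Unet(
    encoder_name="efficientnet-b0",  # assumed variant
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)

# Training-time augmentations mentioned above: horizontal and vertical flips
# plus brightness adjustments (parameter values are placeholders).
train_transforms = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.0, p=0.5),
])
```

Starting from an ImageNet-pretrained encoder lets fine-tuning on IDD begin from general visual features rather than random weights, which is what makes the pretrained U-Net practical to train on this dataset.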

Dataset Description

The dataset used in this project is the Indian Driving Dataset (IDD), which consists of road images and their corresponding segmentation masks. This dataset is specifically designed for binary segmentation tasks, such as distinguishing between Road and Non-Road areas. All images and masks have been resized to uniform dimensions of 512x512 pixels. The dataset is organized into two folders:

  • image_archive contains 6993 three-channel (RGB) road images. The files are in .png format and are named Image_{num}.

  • mask_archive contains 6993 single-channel binary masks corresponding to the road images. The masks are also in .png format and are named Mask_{num} to match the numbering of the images, making each image-mask pair easy to locate.

The following table summarizes the dataset, including the number of files, their shapes, and the naming conventions:

| Directory     | Description                  | Number of Files | Shape     | Example Naming  |
|---------------|------------------------------|-----------------|-----------|-----------------|
| image_archive | Road images                  | 6993            | 512x512x3 | Image_{num}.png |
| mask_archive  | Binary masks (Road/Non-Road) | 6993            | 512x512   | Mask_{num}.png  |
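
Given this naming convention, each mask can be located directly from its image's numeric suffix. The sketch below illustrates one way to pair and binarize the files; it is an assumption for illustration (the class name, directory paths, and binarization threshold are placeholders), not code taken from the notebook.

```python
# Hypothetical loader for the Image_{num}.png / Mask_{num}.png pairs described
# above (a sketch, not the repository's implementation).
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class IDDRoadDataset(Dataset):
    def __init__(self, image_dir="image_archive", mask_dir="mask_archive", transform=None):
        self.image_paths = sorted(Path(image_dir).glob("Image_*.png"))
        self.mask_dir = Path(mask_dir)
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        num = image_path.stem.split("_")[1]                 # Image_{num} -> {num}
        mask_path = self.mask_dir / f"Mask_{num}.png"

        image = np.array(Image.open(image_path).convert("RGB"))          # 512x512x3
        mask = (np.array(Image.open(mask_path)) > 0).astype("float32")   # 512x512, binary

        if self.transform is not None:
            augmented = self.transform(image=image, mask=mask)
            image, mask = augmented["image"], augmented["mask"]
        return image, mask
```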

Below is an example of a road image and its corresponding mask aligned pixel-wise. An additional overlay visualization (not included in the dataset) is provided to visually demonstrate how the mask highlights specific regions (e.g., Road vs. Non-Road) in the context of the original image.

Overlay-mask
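
Such an overlay can be produced by alpha-blending a colour onto the Road pixels of the original image. The snippet below is only a rough illustration of that idea; the function name, colour, and opacity are hypothetical and not taken from the repository.

```python
# Rough sketch of an overlay visualization: tint Road pixels on top of the
# original image (illustrative only; not the repository's plotting code).
import numpy as np
from PIL import Image


def overlay_mask(image_path, mask_path, color=(255, 0, 0), alpha=0.4):
    image = np.array(Image.open(image_path).convert("RGB"), dtype=np.float32)
    mask = np.array(Image.open(mask_path)) > 0                    # Road pixels
    blended = image.copy()
    blended[mask] = (1 - alpha) * image[mask] + alpha * np.array(color, dtype=np.float32)
    return Image.fromarray(blended.astype(np.uint8))
```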

Setup Instructions

Local Environment Setup

  1. Clone the repository:

    git clone https://github.com/Dalageo/IDDRoadSegmentation.git
    
  2. Navigate to the cloned directory:

    cd IDDRoadSegmentation
    
  3. Open IDDRoadSegmentation.ipynb in your preferred Jupyter-compatible environment (e.g., Jupyter Notebook, VS Code, or PyCharm).

  4. Update the dataset, model, and output directory paths to point to their locations in your local environment.

  5. Run the cells sequentially to reproduce the results.
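
Note: the README does not list the project's dependencies explicitly. Assuming the libraries referenced above, an environment along the following lines should work (the exact package set is an assumption about what the notebook imports):

    pip install torch segmentation-models-pytorch albumentations notebook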

Acknowledgments

Firstly, I would like to thank Olaf Ronneberger, Philipp Fischer, and Thomas Brox for introducing U-Net in their 2015 paper, "U-Net: Convolutional Networks for Biomedical Image Segmentation".

Additionally, special thanks to Pavel Iakubovskii for developing and maintaining the segmentation_models.pytorch library, which was essential to the development of this project.

License

The segmentation_models.pytorch library is primarily licensed under the MIT License, with some files under other licenses; refer to its LICENSES directory and per-file statements for details, especially regarding commercial use. The provided notebook and accompanying documentation are licensed under the AGPL-3.0 license, which was chosen to promote open collaboration, ensure transparency, and allow others to freely use, modify, and contribute to the work.

Any modifications or improvements must also be shared under the same license, with appropriate acknowledgment.


