
HPointLoc: open dataset and framework for indoor visual localization based on synthetic RGB-D images

License: MIT

This repository provides PNTR, a novel framework for exploring the capabilities of HPointLoc, a new indoor dataset specially designed for studying place recognition and loop-closure detection in Simultaneous Localization and Mapping (SLAM).

HPointLoc is built with the popular Habitat simulator, using 49 photorealistic indoor scenes from the Matterport3D dataset, and contains 76,000 frames.

When forming the dataset, considerable attention was paid to providing instance segmentation of scene objects, which allows the dataset to be used with emerging semantic methods for place recognition and localization.
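
The frames were rendered in Habitat. The snippet below is a minimal habitat-sim sketch (habitat-sim 0.2.x API) of how RGB, depth, and semantic observations of a Matterport3D scene can be rendered in one pass; the scene path and resolution are placeholders, and this illustrates the general approach rather than the exact generation code used for HPointLoc.

import habitat_sim

def make_camera(uuid, sensor_type, resolution=(256, 256)):
    # One camera spec per modality: COLOR, DEPTH, or SEMANTIC (instance ids)
    spec = habitat_sim.CameraSensorSpec()
    spec.uuid = uuid
    spec.sensor_type = sensor_type
    spec.resolution = list(resolution)
    return spec

backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene_id = "/path/to/matterport3d/scene.glb"  # placeholder scene path
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [
    make_camera("rgb", habitat_sim.SensorType.COLOR),
    make_camera("depth", habitat_sim.SensorType.DEPTH),
    make_camera("semantic", habitat_sim.SensorType.SEMANTIC),
]
sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))
obs = sim.get_sensor_observations()  # dict with "rgb", "depth", "semantic" arrays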

The dataset is split into two parts: the validation set HPointLoc-Val, which contains only one scene, and the complete HPointLoc-All dataset, which contains all 49 scenes, including HPointLoc-Val.

The HPointLoc dataset is available at: https://drive.google.com/drive/folders/1Tic7SuIAASSBpxa5j4Zq_0VaF9rdDav2
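
After downloading and extracting, each frame couples an RGB image with a depth map, camera pose, and per-pixel instance ids. The loader below is only a sketch: the file names (rgb.png, depth.png, semantic.png, pose.txt) and the 4x4 pose format are assumptions about the extracted layout, not a confirmed specification.

import numpy as np
from PIL import Image

def load_frame(frame_dir):
    # Hypothetical file names; adjust to the actual extracted layout
    rgb = np.asarray(Image.open(f"{frame_dir}/rgb.png"))             # H x W x 3 color image
    depth = np.asarray(Image.open(f"{frame_dir}/depth.png"))         # H x W depth map
    instances = np.asarray(Image.open(f"{frame_dir}/semantic.png"))  # H x W instance ids
    pose = np.loadtxt(f"{frame_dir}/pose.txt")                       # 4 x 4 camera-to-world matrix
    return rgb, depth, instances, pose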

Experimental results

The experiments were conducted on the HPointLoc-Val and HPointLoc-All datasets.

Quick start to evaluate the PNTR pipeline

git clone --recurse-submodules https://github.com/cds-mipt/HPointLoc.git
cd HPointLoc
conda env create -f environment.yml
conda activate PTNR_pipeline
# unpack the downloaded HPointLoc archive
python pipelines/utils/exctracting_dataset.py --dataset_path /path/to/dataset/HPointLoc_dataset
# run the full PNTR evaluation pipeline
python pipelines/pipeline_evaluate.py --dataset_root /path/to/extracted_dataset --image-retrieval patchnetvlad --keypoints-matching superpoint_superglue --optimizer-cloud teaser
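
The three flags select the stages of the PNTR pipeline: Patch-NetVLAD for image retrieval, SuperPoint + SuperGlue for keypoint matching, and TEASER++ for point-cloud registration. Localization quality is judged by comparing estimated camera poses against ground truth; the sketch below shows the standard translation/rotation error computation for 4x4 pose matrices, as a generic illustration rather than the repository's exact evaluation code.

import numpy as np

def pose_errors(T_est, T_gt):
    # Translation error in scene units (meters for Matterport3D)
    dt = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    # Rotation error: angle of the relative rotation, in degrees
    dR = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_angle))

Results of this kind are commonly reported as the fraction of queries localized within fixed thresholds, such as 0.25 m and 2 degrees.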
