# PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction

Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering and ultra-fast training and rendering speed. However, due to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency by relying solely on the image reconstruction loss. Although many 3DGS-based surface reconstruction studies have emerged recently, the quality of their meshes is generally unsatisfactory. To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) that achieves high-fidelity surface reconstruction while ensuring high-quality rendering. Specifically, we first introduce an unbiased depth rendering method, which directly renders the distance from the camera origin to the Gaussian plane and the corresponding normal map based on the Gaussian distribution of the point cloud, then divides the distance by the normal component along each pixel's viewing ray to obtain the unbiased depth. We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy. We also propose a camera exposure compensation model to cope with scenes exhibiting large illumination variations. Experiments on indoor and outdoor scenes show that our method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming both 3DGS-based and NeRF-based methods.
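The unbiased depth step amounts to intersecting each pixel ray with the locally fitted Gaussian plane: given an alpha-blended camera-to-plane distance map and normal map, depth follows from dividing the distance by the normal's projection onto the back-projected pixel ray. Below is a minimal NumPy sketch of that final division, assuming the distance and normal maps have already been rendered by the splatting pipeline; the function and variable names are illustrative and not from the PGSR codebase.

```python
import numpy as np

def unbiased_depth(distance_map, normal_map, K):
    """Recover per-pixel depth by intersecting each pixel ray with the
    blended Gaussian plane (illustrative sketch, not the official code).

    distance_map: (H, W)    blended distance from camera origin to plane
    normal_map:   (H, W, 3) blended plane normals in the camera frame
    K:            (3, 3)    camera intrinsics
    """
    H, W = distance_map.shape
    # Homogeneous pixel coordinates p~ = (u, v, 1)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-project through the intrinsics: ray = K^{-1} p~
    rays = pix @ np.linalg.inv(K).T            # (H, W, 3)
    # Ray/plane intersection: depth = distance / (n . K^{-1} p~)
    denom = np.einsum('hwc,hwc->hw', normal_map, rays)
    # Guard against grazing rays where the normal is near-orthogonal to the ray
    return distance_map / np.clip(np.abs(denom), 1e-6, None)
```

Because the division happens after alpha blending, the recovered depth stays consistent with the rendered plane geometry rather than with a blend of per-Gaussian depths, which is the source of the "unbiased" property the abstract refers to.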
