Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction

ECCV 2024

¹Tokyo Institute of Technology  ²Preferred Networks, Inc.

Multi-view 3D surface reconstruction using neural implicit representations has made notable progress by modeling geometry and view-dependent radiance fields within a unified framework. However, their effectiveness in reconstructing objects with specular or complex surfaces is typically biased by the directional parameterization used in the view-dependent radiance network. The viewing direction and the reflection direction are the two most commonly used directional parameterizations, but each has its own limitations: using the viewing direction usually fails to correctly decouple the geometry and appearance of objects with highly specular surfaces, while using the reflection direction tends to yield overly smooth reconstructions for concave or complex structures. In this paper, we analyze their failure cases in detail and propose a novel hybrid directional parameterization that addresses their limitations in a unified form. Extensive experiments demonstrate that the proposed hybrid directional parameterization consistently delivers satisfactory results when reconstructing objects with a wide variety of materials, geometries, and appearances, whereas other directional parameterizations face challenges on certain objects. Moreover, the proposed hybrid directional parameterization is nearly parameter-free and can be effortlessly applied to any existing neural surface reconstruction method.

TL;DR: Common directional parameterizations of view-dependent radiance, whether the viewing direction or the reflection direction, carry inherent limitations in certain scenes; we propose a new hybrid directional parameterization that unifies them.

Observations: scenarios where existing directional parameterizations succeed and struggle.

Analysis of the use of the reflection direction

For the reflection direction, we observed an aspect distinct from the viewing direction: computing the reflection direction requires information from the geometry being optimized, namely the normal (see the sketch below). We provide two specific examples to analyze the impact of incorporating normals on view-dependent radiance modeling.
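In practice, this normal is typically the normalized gradient of the SDF network under optimization. Below is a minimal PyTorch sketch of that computation; sdf_net is a hypothetical module mapping points to signed distances, not the authors' released code.

import torch
import torch.nn.functional as F

def sdf_normal(sdf_net, x: torch.Tensor) -> torch.Tensor:
    """Unit normals from the gradient of the SDF being optimized.

    x: (N, 3) sample points along the rays.
    """
    x = x.requires_grad_(True)
    f = sdf_net(x)  # (N, 1) signed distances
    # create_graph=True keeps the graph so losses that depend on the
    # normals can still backpropagate into the SDF network.
    (grad,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    return F.normalize(grad, dim=-1)

Because the normal is read from the geometry that is still being optimized, any error in the intermediate geometry propagates into the reflection direction, which motivates the two examples below.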

1) Influence from unrelated surface areas

Normals at sampled points may be influenced by unrelated surfaces other than the intersection surface.


2) Scattered distribution of normals and reflection directions

Smooth surfaces yield similar normals and reflection directions (Ref. dirs) across the sampled points, whereas surfaces with intricate local details, e.g., concavities, induce a scattered distribution of normals and reflection directions. This scattering is particularly pronounced slightly away from the vicinity of the zero-level set and adversely affects geometry optimization.

(Figure: normals and reflection directions visualized on a smooth surface, typical of the early stage of training, versus the target complex surface.)

Our solution

We find that: 1) for sampled points near the intersection surface, the normals accurately reflect those of the intersection surface, enabling the reflection direction to better model the interaction between light and surface; while 2) for the other sampled points along the ray, using the reflection direction can lead to the issues demonstrated above, which using the viewing direction avoids. Motivated by these observations, we propose a spatial-aware directional parameterization, which transitions from the reflection direction \(\mathbf{d}_{\mathrm{ref}}\) to the viewing direction \(\mathbf{d}_{\mathrm{view}}\) based on the distance to the surface (i.e., the absolute value of the SDF):

\[\mathbf{d}_\mathrm{hyb} = \mathrm{normalize}(\alpha \cdot \mathbf{d}_\mathrm{ref} + (1-\alpha) \cdot \mathbf{d}_\mathrm{view})\]

where $\alpha \in [0, 1]$ represents the blend weight based on the SDF value:

\[\alpha = \exp\left(-\gamma \cdot \mathrm{detach}\left(|f(\mathbf{x})|\right)\right)\]

Here $f(\mathbf{x})$ is the SDF value at the sampled point $\mathbf{x}$, $\gamma > 0$ controls how quickly the weight decays with distance to the surface, and $\mathrm{detach}(\cdot)$ stops gradients so that the blend weight itself does not backpropagate into the SDF.
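For concreteness, here is a minimal PyTorch sketch of the hybrid parameterization; the function name, signature, and the default value of gamma are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def hybrid_direction(d_view, normal, sdf, gamma=10.0):
    # d_view: (N, 3) unit ray directions (camera -> sample point).
    # normal: (N, 3) unit normals, e.g., from the SDF gradient as above.
    # sdf:    (N, 1) signed distances f(x) at the sampled points.
    # gamma:  decay rate of the blend weight (this default is an assumption).

    # Reflect the viewing direction about the normal: d - 2 (d . n) n.
    d_ref = d_view - 2.0 * (d_view * normal).sum(dim=-1, keepdim=True) * normal

    # alpha = exp(-gamma * |f(x)|), detached so the blend weight does not
    # backpropagate gradients into the SDF.
    alpha = torch.exp(-gamma * sdf.abs().detach())

    # d_hyb = normalize(alpha * d_ref + (1 - alpha) * d_view).
    return F.normalize(alpha * d_ref + (1.0 - alpha) * d_view, dim=-1)

Near the surface, |f(x)| is close to zero, so alpha approaches one and d_hyb follows the reflection direction; farther from the surface, alpha decays toward zero and d_hyb falls back to the viewing direction, matching the two observations above.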

BibTeX

@inproceedings{jiang2024rethinking,
    author    = {Jiang, Zijie and Xu, Tianhan and Kato, Hiroharu},
    title     = {Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2024},
}