Emerging neural radiance fields (NeRF) are a promising scene representation for computer graphics, enabling high-quality 3D reconstruction and novel view synthesis from image observations.
However, editing a scene represented by a NeRF is challenging, as the underlying connectionist representations such as MLPs or voxel grids are not object-centric or compositional.
In particular, it has been difficult to selectively edit specific regions or objects.
In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes.
We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors such as CLIP-LSeg or DINO into a 3D feature field optimized in parallel to the radiance field.
Given a user-specified query in one of several modalities, such as text, an image patch, or a point-and-click selection, 3D feature fields semantically decompose 3D space without re-training, and enable us to semantically select and edit regions of the radiance field.
Our experiments validate that the distilled feature fields (DFFs) can transfer recent progress in 2D vision and language foundation models to 3D scene representations, enabling convincing 3D segmentation and selective editing of emerging neural graphics representations.
TL;DR: Neural radiance fields can be edited via query-based decomposition, using feature fields distilled from pre-trained vision models.
Our distilled feature fields (DFFs) are trained by distilling 2D vision encoders such as LSeg (a CLIP-based zero-shot segmentation model) or DINO (a self-supervised model with strong performance on part-correspondence tasks), without any 2D or 3D annotations.
The learned feature field maps every 3D coordinate to a semantic feature descriptor of that coordinate, alongside the radiance field. We can then segment 3D space, and the corresponding radiance field, by supplying a query feature vector and computing dot-product scores against the field's features.
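The training and querying described above can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the feature dimension, the shared compositing weights, and the loss form are illustrative, not the authors' exact implementation.

```python
import numpy as np

def volume_render(weights, values):
    # Alpha-composite per-sample values along each ray: (R, S) x (R, S, D) -> (R, D)
    return np.einsum("rs,rsd->rd", weights, values)

rng = np.random.default_rng(0)
R, S, D = 4, 8, 16          # rays, samples per ray, feature dimension

# Per-sample features from a hypothetical feature-field branch, composited
# with the same volume-rendering weights as the radiance field.
feat_samples = rng.normal(size=(R, S, D))
weights = rng.random((R, S))
weights /= weights.sum(axis=1, keepdims=True)

rendered = volume_render(weights, feat_samples)   # (R, D) rendered features
teacher = rng.normal(size=(R, D))                 # teacher 2D features (e.g. LSeg/DINO)

# Distillation loss: rendered 3D features should match the teacher's 2D features.
distill_loss = np.mean((rendered - teacher) ** 2)

# Query-based decomposition: dot-product score between a query embedding
# (e.g. the CLIP text embedding of "flower") and each rendered feature.
query = rng.normal(size=(D,))
scores = rendered @ query                         # (R,) similarity per ray
mask = scores > 0.0                               # thresholded selection
```

In a real system `feat_samples` would come from an MLP branch optimized in parallel with the radiance field, and `teacher` from the 2D encoder's feature map at the ray's pixel; the dot-product query step is the same at test time.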
Query-based Scene Decomposition
Distilled feature fields enable a NeRF scene to be decomposed around a specific object, given a text query such as "flower" or an image-patch query.
Localizing CLIPNeRF via Scene Decomposition
CLIPNeRF optimizes a NeRF scene with a text prompt.
However, naive CLIPNeRF also alters unintended parts of the scene.
By combining our decomposition method with CLIPNeRF, we can selectively optimize only the target object.
+ Our method
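One simple way to gate CLIPNeRF-style optimization by the decomposition is to blend the optimized branch with the frozen original scene using a soft query mask, so the objective only affects the selected region. This is a sketch under our assumptions; the gating function and threshold are illustrative, not the paper's exact procedure.

```python
import numpy as np

def soft_select(scores, threshold=0.0, temperature=0.1):
    # Soft mask in [0, 1] from per-point query scores (sigmoid gating).
    return 1.0 / (1.0 + np.exp(-(scores - threshold) / temperature))

rng = np.random.default_rng(1)
scores = rng.normal(size=(5,))        # dot-product scores from the feature field
mask = soft_select(scores)

original_rgb = rng.random((5, 3))     # colors from the frozen, original NeRF
edited_rgb = rng.random((5, 3))       # colors from the branch being optimized

# Only the selected region is replaced; the rest of the scene stays untouched,
# so the text-driven objective cannot leak into unintended parts.
composed = mask[:, None] * edited_rgb + (1.0 - mask[:, None]) * original_rgb
```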
We can edit decomposed objects of NeRF scenes in various ways, e.g., rotate, translate, scale, warp into another scene, colorize, or delete them.
move and deform apple
(Note that the background behind deleted objects can be noisy or contain holes, because it was never observed.)
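Edits like deletion or translation can be expressed as manipulations of the field inside the selected region. The sketch below, under our own assumptions (a toy `field_fn` stands in for the trained radiance-field query), shows deletion as zeroing density within the mask and translation as evaluating the field at shifted coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
sigma = rng.random(N) + 0.1           # per-point densities from the NeRF
mask = rng.random(N) > 0.5            # query-based selection of the object

# Deletion: zero out density inside the selected region. The object vanishes,
# but the occluded background may be noisy since it was never observed.
sigma_deleted = np.where(mask, 0.0, sigma)

# Translation: evaluate the selected object's field at shifted coordinates.
def field_fn(x):
    return np.linalg.norm(x, axis=-1) # toy density as a placeholder

x = rng.random((N, 3))
offset = np.array([0.1, 0.0, 0.0])
sigma_moved = np.where(mask, field_fn(x - offset), field_fn(x))
```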
LSeg Feature Field
We visualize a feature field distilled from LSeg as the teacher network, via PCA. The flower scene is from LLFF.
Other DINO Feature Fields
We visualize feature fields distilled from DINO as the teacher network, via PCA. Scenes are from LLFF, the Shiny dataset, and our own dataset.