RebuilderAI_Blog


Technique

[Tech] NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination | NeRFactor Review

다육 2022. 7. 22. 18:32

by Geunho Jung (AI researcher / R&D)

 

Today we would like to share a review of the "NeRFactor" paper.

Have you heard of NeRFactor before?

No? Then let's talk about it together.

 

https://rebuilderai.github.io/reconstruction,%20re-lighting,%20material%20editing/2022/07/19/NeRFactor.html#introduction

 

NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination (SIGGRAPH Asia 2021) | RebuilderAI.github.io

 

More information is available here: https://rebuilderai.github.io/

 

Home | RebuilderAI Tech Blog (RebuilderAI.github.io)


Table of Contents

1. Introduction
2. Method

   (1) Assumptions

   (2) Shape

   (3) Reflectance

   (4) Rendering

 

Paper

[paper]

NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination. Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron. TOG 2021 (Proc. SIGGRAPH Asia).

 


1. Introduction

The problem this paper focuses on is reconstructing an object's geometry and material properties from multi-view images. In particular, given multi-view images and the cameras corresponding to those images, it recovers the spatially-varying reflectance over the object's shape, assuming unknown lighting conditions. The key idea of the paper is to distill the volumetric geometry of NeRF (Neural Radiance Fields, ECCV 2020) into a surface representation, and then refine that geometry jointly while solving for both reflectance and environment lighting. The authors introduce this as the NeRFactor representation.

 

NeRFactor (Neural Factorization) is optimized to reconstruct 3D neural fields representing surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs). It is optimized using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior, without any special supervision. In short, NeRFactor takes multi-view images (plus cameras) as input and optimizes MLPs to represent these 3D neural fields well. The result can be used for applications such as re-lighting and material editing.

 

 


2. Method

[attachment1]

(1) Assumptions

1. Input: multi-view images + cameras
2. Output: surface normals, light visibility, albedo, reflectance (BRDFs)
3. A single, unknown illumination condition

 

(2) Shape

When the multi-view images and cameras are received as input, the initial geometry is first computed through an already-optimized NeRF. As mentioned above, the density from the optimized NeRF is used as a continuous surface representation. In detail, the NeRF density is used to compute the expected termination depth (distance t) along each camera ray, and from that depth the surface point is obtained.

[equation1] Surface point

The surface point is used as the input to the MLPs that compute visibility, BRDF, albedo, and the normal, as you can see in [equation1].
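For intuition, here is a minimal NumPy sketch (my illustrative assumption, not the paper's exact implementation) of how a surface point can be recovered from NeRF density as the expected ray-termination depth:

```python
import numpy as np

def termination_weights(sigma, t):
    """Volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # sample spacings
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    return trans * alpha

def expected_surface_point(origin, direction, sigma, t):
    """x_surf = o + (sum_i w_i * t_i) * d: the expected termination
    depth, pushed along the camera ray to give a surface point."""
    t_exp = np.sum(termination_weights(sigma, t) * t)
    return origin + t_exp * direction
```

With density concentrated near one depth, the recovered surface point lands at that depth along the ray.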

First, the normal is obtained from the gradient of the NeRF density. Computing the normal this way produces artifacts, as shown in [attachment2], so it is re-parameterized through a normal MLP and optimized with [equation2].

[attachment2]
[equation2] Normal loss function
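As a sketch of the analytic-normal step (finite differences stand in for automatic differentiation here, and the spherical density function in the usage below is a hypothetical toy; both are my assumptions), the normal is the negative normalized density gradient:

```python
import numpy as np

def nerf_normal(density_fn, x, eps=1e-4):
    """n = -grad(sigma) / ||grad(sigma)||: the negative, normalized
    density gradient, approximated by central finite differences."""
    grad = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = eps
        grad[i] = (density_fn(x + dx) - density_fn(x - dx)) / (2.0 * eps)
    return -grad / np.linalg.norm(grad)
```

For a density that falls off radially from the origin, this yields outward-pointing normals; in practice these raw normals are the noisy estimates that the normal MLP re-parameterizes.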

Second, light visibility is computed from the NeRF density. Similarly, since this raw estimate is noisy, as shown in [attachment3], it is re-parameterized through a visibility MLP and optimized with [equation3].

[attachment3] Light visibility
[equation3] Visibility loss function
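A rough sketch of raw visibility from density (the uniform sampling scheme and the toy density function below are my assumptions): visibility toward a light direction is the accumulated transmittance of a shadow ray marched from the surface point.

```python
import numpy as np

def light_visibility(density_fn, x, omega, t_near=0.05, t_far=2.0, n_samples=64):
    """v(x, omega): transmittance exp(-sum_i sigma_i * delta) along a
    shadow ray from the surface point x toward light direction omega."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = t[1] - t[0]
    sigma = np.array([density_fn(x + ti * omega) for ti in t])
    return np.exp(-np.sum(sigma * delta))
```

Free space gives visibility near 1; dense matter along the shadow ray drives it toward 0, which is exactly the hard-shadow signal the visibility MLP then smooths.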

 

(3) Reflectance

To model reflectance, the albedo at the input surface location is first learned through the albedo MLP.

[equation4] Albedo loss

Next, to represent the BRDF, a latent code is computed through the BRDF identity MLP as in [attachment1] and used as an input to the BRDF MLP. The BRDF MLP is pre-trained on the MERL dataset with a Generative Latent Optimization (GLO) approach: it receives the latent code together with the incident and reflection angles converted into Rusinkiewicz coordinates, and its output is combined with the albedo to produce the full reflectance.
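As a sketch of the coordinate conversion (simplified: I omit the azimuthal difference angle of the full Rusinkiewicz parameterization), the half-vector angles fed to the BRDF MLP look like:

```python
import numpy as np

def rusinkiewicz_angles(n, w_i, w_o):
    """Half-vector parameterization of an (incident, outgoing) pair:
    theta_h = angle between the surface normal and the half vector,
    theta_d = angle between the half vector and the incident direction."""
    h = w_i + w_o
    h /= np.linalg.norm(h)
    theta_h = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))
    theta_d = np.arccos(np.clip(np.dot(h, w_i), -1.0, 1.0))
    return theta_h, theta_d
```

Feeding angles rather than raw direction vectors bakes reciprocity and isotropy symmetries directly into the BRDF MLP's input.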

 

(4) Rendering

The final color is rendered using the surface normal, visibility, albedo, BRDF, and lighting computed through the process so far. Rendering follows the rendering equation, given below.

[equation5] Rendering equation
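A discretized sketch of this step, summing over light-probe pixels (the split into a Lambertian term plus a learned specular term follows the text above; the exact array layout is my assumption):

```python
import numpy as np

def render_color(albedo, normal, visibility, light, light_dirs, solid_angles, f_spec):
    """L_o = sum_i BRDF_i * L_i * v_i * max(n . w_i, 0) * dOmega_i,
    with BRDF = albedo / pi (Lambertian) + learned specular lobe f_spec."""
    cos = np.clip(light_dirs @ normal, 0.0, None)         # (n_lights,) foreshortening
    brdf = albedo[None, :] / np.pi + f_spec[:, None]      # (n_lights, 3)
    weight = (visibility * cos * solid_angles)[:, None]   # shadowing + geometry terms
    return (brdf * light * weight).sum(axis=0)            # (3,) RGB
```

Because every factor here (normal, visibility, albedo, specular lobe, lighting) is a learned field, swapping the `light` array re-lights the scene and editing `albedo`/`f_spec` edits the material.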

The final loss is the reconstruction loss obtained by adding the losses in equations 1) to 4).

 

 


Thank you for reading🥰

Come and visit our Instagram

https://www.instagram.com/rebuilderai_official/