[Tech] NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination | NeRFactor Review
다육 | 2022. 7. 22. 18:32 | by Geunho Jung (AI Researcher / R&D)
Today we would like to share a review of the NeRFactor paper.
Have you heard of NeRFactor before?
No? Then let's go through it together.
More information is available here: https://rebuilderai.github.io/
Table of Contents
1. Introduction
2. Method
(1) Assumptions
(2) Shape
(3) Reflectance
(4) Rendering
Paper
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron
TOG 2021 (Proc. SIGGRAPH Asia)
1. Introduction
The problem this paper focuses on is reconstructing the geometry and material properties of an object from multi-view images. In particular, given multi-view images and the corresponding cameras, it recovers the object's shape together with its spatially-varying reflectance, assuming unknown lighting conditions. The key idea of the paper is to distill the volumetric geometry of NeRF (Neural Radiance Fields, ECCV 2020) into a surface representation, and to refine that geometry jointly while solving for both reflectance and environment lighting. The authors call this representation NeRFactor.
NeRFactor (Neural Factorization) is optimized to reconstruct 3D neural fields that represent surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs). It is trained with a re-rendering loss, smoothness priors, and a data-driven BRDF prior, without any special supervision. In other words, NeRFactor takes multi-view images (plus cameras) as input and optimizes MLPs to represent these 3D neural fields well. The result can then be used for applications such as relighting and material editing.
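Below is a minimal PyTorch sketch of this factorization: one small MLP head per predicted quantity (surface normal, light visibility, albedo, and a BRDF identity latent code), all queried at a surface point. The module names, layer sizes, and activations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp(in_dim, out_dim, hidden=128, depth=4):
    """Small fully-connected network; sizes are illustrative."""
    dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
    layers = []
    for i in range(depth):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < depth - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class NeRFactorHeads(nn.Module):
    """One MLP head per factor, all evaluated at a surface point x."""
    def __init__(self, x_dim=3, light_dim=3, z_dim=3):
        super().__init__()
        self.normal_mlp = make_mlp(x_dim, 3)                   # surface normal n(x)
        self.visibility_mlp = make_mlp(x_dim + light_dim, 1)   # visibility v(x, w_i)
        self.albedo_mlp = make_mlp(x_dim, 3)                   # albedo a(x)
        self.brdf_id_mlp = make_mlp(x_dim, z_dim)              # BRDF identity latent z(x)

    def forward(self, x, light_dir):
        n = F.normalize(self.normal_mlp(x), dim=-1)
        v = torch.sigmoid(self.visibility_mlp(torch.cat([x, light_dir], dim=-1)))
        a = torch.sigmoid(self.albedo_mlp(x))
        z = self.brdf_id_mlp(x)
        return n, v, a, z
```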
2. Method
(1) Assumptions
1. Input: Multi-view image + camera
2. Output: Surface normal, Light visibility, Albedo, Reflectance
3. One unknown illumination condition
(2) Shape
Given the multi-view images and cameras as input, the initial geometry is first obtained from an already-optimized NeRF. As mentioned above, the density from this optimized NeRF is used as a continuous surface representation. In detail, the density is queried along each camera ray, the depth (distance t) is computed from it, and the corresponding surface point is obtained from that depth. A sketch of this step is shown below.
The surface point is then used as the MLP input to compute the visibility, BRDF, albedo, and normal, as shown in [equation1].
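As a rough illustration, here is a minimal NumPy sketch that takes one camera ray, queries an optimized NeRF density (here a placeholder `query_density` function), computes the expected termination depth from the volume-rendering weights, and returns the corresponding surface point. The sampling bounds and sample count are assumptions for illustration only.

```python
import numpy as np

def expected_surface_point(ray_o, ray_d, query_density,
                           t_near=0.1, t_far=6.0, n_samples=128):
    """Surface point along one ray, derived from an optimized NeRF's density."""
    t = np.linspace(t_near, t_far, n_samples)             # distances along the ray
    pts = ray_o + t[:, None] * ray_d                      # 3D sample positions
    sigma = query_density(pts)                            # NeRF density at each sample
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))    # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                               # volume-rendering weights
    t_surf = (weights * t).sum() / (weights.sum() + 1e-10)  # expected termination depth
    return ray_o + t_surf * ray_d                         # surface point x_surf
```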
First, the normal is obtained from the gradient of the NeRF density. Computing normals this way produces artifacts, as shown in [attachment2], so the normal is re-parameterized through a Normal MLP and optimized via [equation2].
Second, visibility is computed from the NeRF density. Similarly, since this produces noise, as shown in [attachment3], it is re-parameterized through a Visibility MLP and optimized via [equation3].
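As a rough sketch of this re-parameterization (with assumed loss weights and perturbation scale, not the paper's exact values), the "raw" normal can be taken as the negative normalized gradient of the NeRF density, and the Normal MLP can be pulled toward it while a smoothness prior is applied at nearby surface points. The Visibility MLP can be handled analogously.

```python
import torch
import torch.nn.functional as F

def raw_normal_from_density(density_fn, x):
    """Negative normalized gradient of the NeRF density at surface points x."""
    x = x.clone().requires_grad_(True)
    sigma = density_fn(x).sum()
    grad = torch.autograd.grad(sigma, x, create_graph=True)[0]
    return -F.normalize(grad, dim=-1)

def normal_mlp_loss(normal_mlp, density_fn, x, eps=0.01, w_fit=0.1, w_smooth=0.05):
    """Pull the Normal MLP toward the NeRF-derived normal, plus a smoothness prior."""
    n_pred = F.normalize(normal_mlp(x), dim=-1)
    n_raw = raw_normal_from_density(density_fn, x).detach()
    fit = ((n_pred - n_raw) ** 2).sum(-1).mean()        # stay close to NeRF normals
    x_jit = x + eps * torch.randn_like(x)                # nearby (jittered) surface points
    n_jit = F.normalize(normal_mlp(x_jit), dim=-1)
    smooth = (n_pred - n_jit).abs().sum(-1).mean()       # spatial smoothness prior
    return w_fit * fit + w_smooth * smooth
```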
(3) Reflectance
To model the reflectance, the albedo at the input surface location is first learned through the Albedo MLP.
Next, to represent the BRDF, a latent code is computed through the BRDF identity MLP as in [attachment1] and used as input to the BRDF MLP. The BRDF MLP is pre-trained on the MERL dataset with a generative latent optimization (GLO) approach; it takes the latent code together with the incident and reflected directions converted into Rusinkiewicz coordinates, and outputs the reflectance, which is combined with the albedo.
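Here is a hedged sketch of how the reflectance could be assembled from these pieces: the albedo provides the Lambertian (diffuse) part, and a BRDF MLP pre-trained on MERL predicts the non-diffuse part from the latent code and the Rusinkiewicz angles. `rusink_coords` is a simplified isotropic approximation and `brdf_mlp` is an assumed callable, not the paper's actual interface.

```python
import math
import torch
import torch.nn.functional as F

def rusink_coords(w_i, w_o, n):
    """Simplified isotropic Rusinkiewicz angles (theta_h, theta_d, phi_d=0)."""
    h = F.normalize(w_i + w_o, dim=-1)                    # half vector
    theta_h = torch.acos((h * n).sum(-1).clamp(-1.0, 1.0))
    theta_d = torch.acos((w_i * h).sum(-1).clamp(-1.0, 1.0))
    phi_d = torch.zeros_like(theta_h)                     # dropped for isotropic BRDFs
    return torch.stack([theta_h, theta_d, phi_d], dim=-1)

def reflectance(albedo, brdf_latent, w_i, w_o, n, brdf_mlp):
    """Full BRDF R(x, w_i, w_o): Lambertian albedo term + learned specular term."""
    diffuse = albedo / math.pi                            # diffuse part from the Albedo MLP
    angles = rusink_coords(w_i, w_o, n)
    specular = brdf_mlp(torch.cat([brdf_latent, angles], dim=-1))
    return diffuse + specular
```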
(4) Rendering
The final color is rendered using the surface normal, visibility, albedo, BRDF, and lighting computed through the steps above. Rendering follows the rendering equation, sketched below.
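Since the original equation image is not reproduced in this text, here is a sketch of the rendering equation in discretized form (notation assumed): the outgoing radiance at surface point x in direction ω_o sums, over the sampled incoming light directions ω_i, the reflectance times the incoming light (the learned light probe masked by the predicted visibility) times the cosine foreshortening term.

```latex
L_o(\mathbf{x}, \boldsymbol{\omega}_o)
  = \sum_i R(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)\,
          L_i(\mathbf{x}, \boldsymbol{\omega}_i)\,
          (\boldsymbol{\omega}_i \cdot \mathbf{n}(\mathbf{x}))\,
          \Delta\omega_i
```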
The final loss is the reconstruction loss combined with the terms in equations (1) to (4).
Thank you for reading🥰
Come and visit our Instagram
https://www.instagram.com/rebuilderai_official/