Learning Residual Color for Novel View Synthesis

IEEE Trans Image Process. 2022;31:2257-2267. doi: 10.1109/TIP.2022.3154242. Epub 2022 Mar 11.

Abstract

Scene Representation Networks (SRNs) have proven to be a powerful tool for novel view synthesis in recent work. They use a fully connected network to learn a mapping from the world coordinates of spatial points to radiance color and scene density. In practice, however, scene texture contains complex high-frequency details that are hard for a network with limited parameters to memorize, leading to disturbing blurry artifacts when rendering novel views. In this paper, we propose to learn 'residual color' instead of 'radiance color' for novel view synthesis, i.e., the residual between the surface color and a reference color. The reference color is computed from spatial color priors extracted from the input view observations. The beauty of this strategy is that the residuals between radiance color and reference color are close to zero for most spatial points and are therefore easier to learn. We present a novel view synthesis system that learns the residual color using an SRN. Experiments on public datasets demonstrate that the proposed method achieves competitive performance in preserving high-resolution details, leading to visually more pleasant results than the state of the art.
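The residual-color idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the reference color is a simple per-channel average of the colors observed for a spatial point across the input views (a hypothetical stand-in for the paper's spatial color priors), and the "residual" is the small per-channel offset a network would be trained to predict.

```python
def reference_color(observed_colors):
    """Reference color as the per-channel mean of the colors observed for a
    spatial point across the input views. This averaging is an illustrative
    stand-in for the paper's spatial color priors, not its actual method."""
    n = len(observed_colors)
    return [sum(color[ch] for color in observed_colors) / n for ch in range(3)]

def compose_color(reference, residual):
    """Final radiance color = reference color + learned residual,
    clipped to the valid [0, 1] RGB range."""
    return [min(1.0, max(0.0, r + d)) for r, d in zip(reference, residual)]

# Toy example: three views observe nearly the same color for a point, so the
# residual the network must learn is close to zero -- the key property the
# abstract exploits.
views = [
    [0.80, 0.52, 0.31],
    [0.78, 0.50, 0.30],
    [0.82, 0.51, 0.29],
]
ref = reference_color(views)            # [0.80, 0.51, 0.30]
residual = [0.01, -0.01, 0.02]          # what an SRN-style network would predict
final = compose_color(ref, residual)    # [0.81, 0.50, 0.32]
```

Because most spatial points already look like their reference color, the target residuals concentrate near zero, which is the easier regression target the abstract argues for.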