Synthesizing controllable, photo-realistic images and videos is a fundamental goal of computer graphics. Neural rendering is a rapidly emerging field of image synthesis that allows compact scene representations and, by employing neural networks, enables rendering to be learned from existing observations. Neural Radiance Fields (NeRF) effectively combine neural fields with the classical graphics technique of volume rendering, achieving the first photo-realistic view synthesis from an implicit representation. Unlike previous approaches, NeRF uses a volume as its intermediate representation, reconstructing an implicit volume of the scene. Although NeRF's advantages are apparent, the original formulation has many drawbacks: it is slow to train and render, requires many input views, can only represent static scenes, and a trained NeRF representation does not generalize to other scenes. This report surveys the optimizations that scholars have proposed for these shortcomings over the last three years and analyzes the solutions to NeRF's problems from several perspectives.
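The combination of an implicit field with volume rendering mentioned above can be illustrated with a minimal sketch of NeRF-style numerical compositing along a single ray. This is a simplified, hedged example: the function name `composite_ray` and the hard-coded sample values are illustrative assumptions, not part of any NeRF codebase, and the density/color samples would in practice come from the learned MLP.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Numerically composite N samples along one ray, NeRF-style.

    sigmas: (N,) volume densities at the samples
    colors: (N, 3) RGB values predicted at the samples
    deltas: (N,) distances between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so T_1 = 1
    weights = trans * alphas                      # w_i = T_i * alpha_i
    return (weights[:, None] * colors).sum(axis=0)  # expected ray color

# A dense red sample in front occludes the samples behind it.
sigmas = np.array([10.0, 0.1, 0.1])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.array([0.5, 0.5, 0.5])
rgb = composite_ray(sigmas, colors, deltas)  # dominated by the red sample
```

Training a NeRF amounts to regressing the MLP so that colors composited this way match the observed pixels, which is why rendering speed and per-scene optimization are central pain points addressed by the follow-up works this report surveys.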