Summary: In recent years, neural radiance fields (NeRF) have been widely adopted in computer graphics for their excellent reconstruction quality. However, image capture in the wild is often affected by internal and external factors that produce blurry images. To address the degradation of NeRF reconstruction quality caused by real-world defocus blur, this paper proposes a new deblurred radiance field and designs a rigid blur kernel that, conditioned on the depth features of the image frame, models the rigid transformation of rays and the composition weights of the coarse color components. To address the problem that similar two-dimensional coordinates prevent the model from distinguishing scene details in the out-of-focus background, a fine sampling weight based on multiscale depth-feature fusion is further proposed, together with a staged optimization strategy. Experimental results show that, compared with state-of-the-art methods, the proposed method recovers scene details better and generates high-quality images of defocus-blurred scenes.
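To make the rigid blur kernel concrete, the sketch below shows one plausible reading of the idea, not the authors' implementation: per pixel, K rigid ray transforms (axis-angle rotation plus translation) and softmax composition weights are predicted from a depth feature, each transformed ray is rendered, and the colors are blended into the observed blurry pixel. The names `RigidBlurKernel`, `blurry_color`, the feature dimension, and the `render_ray` callback are all assumptions for illustration.

```python
# Hypothetical sketch of a per-pixel rigid blur kernel for a deblurred
# radiance field; module and function names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 5  # assumed number of rigidly transformed rays per pixel


class RigidBlurKernel(nn.Module):
    """Predicts K rigid transforms (axis-angle rotation + translation)
    and color composition weights from a per-pixel depth feature."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, K * 7),  # per ray: 3 rotation + 3 translation + 1 weight
        )

    def forward(self, feat):                       # feat: (B, feat_dim)
        p = self.mlp(feat).view(-1, K, 7)
        rot_vec, trans, w = p[..., :3], p[..., 3:6], p[..., 6]
        weights = F.softmax(w, dim=-1)             # convex combination of colors
        return rot_vec, trans, weights


def axis_angle_to_matrix(r):
    """Rodrigues' formula: R = I + sin(t)*Kx + (1 - cos(t))*Kx^2."""
    theta = r.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    k = r / theta
    Kx = torch.zeros(*r.shape[:-1], 3, 3, device=r.device)
    Kx[..., 0, 1], Kx[..., 0, 2] = -k[..., 2], k[..., 1]
    Kx[..., 1, 0], Kx[..., 1, 2] = k[..., 2], -k[..., 0]
    Kx[..., 2, 0], Kx[..., 2, 1] = -k[..., 1], k[..., 0]
    eye = torch.eye(3, device=r.device).expand_as(Kx)
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    return eye + s * Kx + (1 - c) * (Kx @ Kx)


def blurry_color(rays_o, rays_d, feat, kernel, render_ray):
    """Composes the observed (blurry) pixel color from K rigidly
    transformed rays, weighted by the predicted kernel weights.
    `render_ray` is an assumed NeRF renderer: (B,K,3),(B,K,3) -> (B,K,3)."""
    rot_vec, trans, weights = kernel(feat)         # (B,K,3), (B,K,3), (B,K)
    R = axis_angle_to_matrix(rot_vec)              # (B,K,3,3)
    o = (R @ rays_o[:, None, :, None]).squeeze(-1) + trans
    d = (R @ rays_d[:, None, :, None]).squeeze(-1)
    colors = render_ray(o, d)                      # (B,K,3)
    return (weights[..., None] * colors).sum(dim=1)
```

Under this reading, supervising `blurry_color` against the blurry input lets the underlying radiance field stay sharp while the kernel absorbs the defocus; the depth conditioning is what would let rays on the non-focal plane receive different transforms than in-focus ones.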
|