LSE-NeRF: Learning Sensor Modeling Errors for Deblurred Neural Radiance Fields with RGB-Event Stereo

¹University of British Columbia  ²York University


Novel view reconstructions for (left) our method and (right) BAD-NeRF [25].


Abstract

We present a method for reconstructing a clear Neural Radiance Field (NeRF) even under fast camera motion. To address blur artifacts, we leverage both (blurry) RGB images and event camera data captured in a binocular configuration. Importantly, when reconstructing our clear NeRF, we model the imperfections arising from the simple pinhole camera model as learned embeddings for each camera measurement, and further learn a mapper that connects event camera measurements with RGB data. As no previous dataset exists for our binocular setting, we introduce an event camera dataset captured with a 3D-printed stereo rig pairing an RGB camera with an event camera. Empirically, we evaluate on our introduced dataset and on EVIMOv2, and show that our method leads to improved reconstructions. We are committed to making our code and dataset public.
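To make the embedding idea concrete, the sketch below shows one way per-measurement learned codes and an event-to-RGB mapper could be set up in PyTorch. This is a minimal illustration under our own assumptions, not the paper's implementation; all names (SensorErrorEmbeddings, event_as_rgb, the embedding width, the mapper depth) are hypothetical.

```python
# Minimal sketch (hypothetical, not the authors' code): one learnable code per
# RGB exposure and per event window captures sensor/camera-model errors, and a
# small MLP "mapper" translates event-side codes into the RGB embedding space.
import torch
import torch.nn as nn

class SensorErrorEmbeddings(nn.Module):
    def __init__(self, num_rgb_frames: int, num_event_windows: int, dim: int = 32):
        super().__init__()
        # One learnable embedding per camera measurement.
        self.rgb_codes = nn.Embedding(num_rgb_frames, dim)
        self.event_codes = nn.Embedding(num_event_windows, dim)
        # Mapper connecting event-camera measurements with RGB data.
        self.mapper = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def rgb(self, frame_idx: torch.Tensor) -> torch.Tensor:
        # Code conditioning the shared NeRF for a given RGB frame.
        return self.rgb_codes(frame_idx)

    def event_as_rgb(self, window_idx: torch.Tensor) -> torch.Tensor:
        # Map an event-window code into the RGB embedding space so both
        # sensors condition the same radiance field consistently.
        return self.mapper(self.event_codes(window_idx))

# Usage: the returned codes would be concatenated to ray samples (or used to
# perturb per-measurement parameters) before querying the shared NeRF.
emb = SensorErrorEmbeddings(num_rgb_frames=100, num_event_windows=100)
rgb_code = emb.rgb(torch.tensor([3]))          # shape (1, 32)
event_code = emb.event_as_rgb(torch.tensor([3]))
```

Keeping a separate code per measurement lets the optimization absorb per-exposure deviations from the ideal pinhole model, while the mapper ties the two sensors' codes together instead of learning them independently.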


More Results

'Bag' scene (Outdoors): BADNeRF | BADNeRF + Our Embeddings | E2NeRF | Our Method
'Dragon Max' scene (Indoors): BADNeRF | BADNeRF + Our Embeddings | E2NeRF | Our Method

Collected Dataset

We visualize the first 50 and last 50 frames for some scenes in our collected dataset. The left shows the blurry RGB frames and the right shows the event camera data; the green dots are the triangulated 3D points.

'Courtyard' scene (Outdoors)
'Bicycle' scene (Outdoors)
'Grad Lounge' scene (Indoors)
'Teddy Grass' scene (Indoors)