Google’s amazing new photo AI brings light to darkness and much, much more


Photographers may soon be able to effectively ‘see in the dark’ after Google Research added a new AI noise reduction tool to its MultiNeRF project.

The RawNeRF program can read images, using artificial intelligence to add higher levels of detail (and far fewer unsightly artifacts) to photos taken in darker conditions and low-light settings. According to the team behind the project, it works better than any other noise reduction tool out there. 

“When optimized over many noisy raw inputs, NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images,” the researchers explained in a paper published on arXiv. 
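The intuition behind optimizing over "many noisy raw inputs" is that noise is random from frame to frame while the scene is not, so combining observations suppresses the noise. A minimal sketch of that effect (illustrative only, not the RawNeRF algorithm, which merges views through a learned 3D scene representation rather than simple averaging):

```python
import numpy as np

rng = np.random.default_rng(0)

# A dim, clean linear-intensity signal observed 50 times with random noise.
# All values here are made up for illustration.
clean = np.full(1000, 0.05)
observations = clean + rng.normal(0.0, 0.02, size=(50, 1000))

single_noise = np.std(observations[0] - clean)   # noise in one frame
merged = observations.mean(axis=0)               # combine all 50 "exposures"
merged_noise = np.std(merged - clean)            # ~1/sqrt(50) of the above

print(f"noise of one frame: {single_noise:.4f}")
print(f"noise of 50 merged: {merged_noise:.4f}")
```

Averaging N independent observations shrinks the noise standard deviation by roughly a factor of sqrt(N); RawNeRF gets a similar benefit while also handling the fact that its inputs are taken from different camera positions.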

What is NeRF?

NeRF is a view synthesizer – a tool that combines many photographs of a scene to reconstruct an accurate 3D render.
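Under the hood, a NeRF represents the scene as a field of density and color, and renders a pixel by compositing samples along a camera ray. A toy sketch of that volume-rendering step (illustrative only; a real NeRF learns the density and color values with a neural network):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample (density, color) along one ray into a pixel color."""
    alphas = 1.0 - np.exp(-densities * deltas)  # opacity contributed by each sample
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Three samples along a ray: empty space, then a reddish surface.
densities = np.array([0.0, 8.0, 8.0])
colors = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.2, 0.2],
                   [1.0, 0.2, 0.2]])
deltas = np.array([0.5, 0.5, 0.5])          # spacing between samples

pixel = render_ray(densities, colors, deltas)
```

Because the same 3D field must explain every input photograph at once, the reconstruction is constrained far more strongly than any single image could manage.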

According to Ben Mildenhall, one of the project researchers, NeRF works best with well-lit photographs and low noise levels. In other words, it’s built for daytime shooting. 

Low-light and night shoots proved problematic: details hide in shadow, and brightening images in post only amplifies the noise. The issue Mildenhall and the team found was that denoising tools can reduce that noise somewhat, but at the cost of image quality.

With the advent of RawNeRF, artificial intelligence is set to quieten the noise without stripping away the detail – effectively letting shutterbugs ‘see in the dark’.

In a video demonstration, NeRF in the Dark – originally published in May 2022 and going largely unnoticed at the time – Mildenhall shows cell phone captures of a scene lit only by candlelight. RawNeRF is “able to combine images taken from many different camera viewpoints to jointly denoise and reconstruct the scene,” the Google researcher explains.

Original (L) vs RawNeRF (R) (Image credit: Google Research)

Reconstructed images are rendered in a linear HDR color space, letting users further manipulate angles, exposures, tonemapping, and focus. In his video, Mildenhall notes how varying each of these together “creates an atmospheric effect that can bring attention to different regions of the scene.”
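Rendering in a linear HDR color space is what makes exposure a post-hoc choice: in linear light, changing exposure is just multiplication, after which a tonemapping curve maps the result to a displayable range. A minimal sketch of that idea (the function name and the simple gamma curve are illustrative assumptions, not RawNeRF’s actual pipeline):

```python
import numpy as np

def adjust_and_tonemap(linear_hdr, exposure_stops=0.0, gamma=2.2):
    """Scale a linear HDR image by an exposure change, then tonemap for display."""
    scaled = linear_hdr * (2.0 ** exposure_stops)  # +1 stop doubles the light
    clipped = np.clip(scaled, 0.0, 1.0)
    return clipped ** (1.0 / gamma)                # simple gamma display curve

# A very dark pixel in linear HDR space (made-up values).
dark_pixel = np.array([0.010, 0.008, 0.012])

# Brighten by three stops (8x) after the fact, then tonemap.
brightened = adjust_and_tonemap(dark_pixel, exposure_stops=3.0)
```

On a normal photograph this kind of brightening would dredge up noise along with the signal; because RawNeRF’s renders are already denoised, the exposure can be pushed cleanly.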

While still in the research phase and not an officially supported Google product (yet), RawNeRF offers a tantalizing glimpse of how AI could help creatives better reflect the world around them.
