
If you have great lighting, a good photographer can take decent pictures with even the crappiest camera imaginable. In low light, however, all bets are off. Of course, some cameras can shoot haunting video lit only by moonlight, but for photos, and especially photos taken with a smartphone, digital noise continues to be a scourge. We may be getting close to what is possible with the hardware; heat and physics work against us even with better camera sensors. But then Google Research came along and launched an open source project it calls MultiNeRF, and I feel like we're on the precipice of everything changing.
I could write millions of words about how great this is, but I can do better; here's a 1-minute-51-second video that, at 30 frames per second and going by 'a picture says a thousand words', is magic worth over 3 million words:
The algorithms work on raw image data, adding AI magic to figure out what the footage "should" have looked like without the distinctive noise generated by the image sensors.
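For the curious, the piece of the MultiNeRF repo that handles these night scenes is RawNeRF, which trains a neural radiance field directly on the noisy raw captures rather than on processed JPEGs. Here's a minimal sketch, in JAX (which the repo itself uses), of the kind of weighted raw-space loss the RawNeRF paper describes; the function name, epsilon value, and toy inputs are my own illustration, not the project's actual API:

```python
import jax
import jax.numpy as jnp

def rawnerf_style_loss(rendered, noisy_raw, eps=1e-3):
    # A plain L2 loss in linear raw space would let bright pixels dominate,
    # since raw values span a huge dynamic range. Dividing each error by a
    # stop-gradient copy of the rendered value approximates training against
    # a log tone curve, so dark (noisy) regions still count.
    w = 1.0 / (jax.lax.stop_gradient(rendered) + eps)
    return jnp.mean((w * (rendered - noisy_raw)) ** 2)

# Toy example: `rendered` would come from the NeRF's volume renderer,
# `noisy_raw` from the camera's raw sensor measurements.
rendered = jnp.array([0.002, 0.5, 0.9])
noisy_raw = jnp.array([0.004, 0.48, 0.92])
print(rawnerf_style_loss(rendered, noisy_raw))
```

Because the noise in each raw frame is essentially zero-mean, fitting one consistent 3D scene to many noisy views averages it away, which is why the rendered result looks cleaner than any single input frame.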
This is currently more of a research project than a commercially available product, but as a photography and AI geek, I'm extremely excited about these developments; the lines between photography and computer graphics are blurring, and I'm here for it. Computational photography is already present in all modern smartphones to some extent, and it's only a matter of time before algorithms like this are fully integrated as well.