Ubicept at ICCP2023

Some takeaways and a new demo from the International Conference on Computational Photography

Over the past weekend, our team had the opportunity to attend (and sponsor!) ICCP2023 (the International Conference on Computational Photography) in Madison, Wisconsin. It was a fantastic experience, providing us with the chance to reconnect with old colleagues, forge new connections, and stay updated on the cutting-edge research in our field. Plus, while we’re a Boston-based company, many of our roots are in Madison—our CEO was a postdoctoral associate there and two of our faculty co-founders are at the university (go Badgers!).

What set this particular year apart and got us really excited was the increased focus on single-photon imaging. It's fascinating to witness the growing interest and recognition of its immense potential. Some relevant data points:

  • By our count, over a quarter of the papers and posters used single-photon sensors in their research. Considering the breadth of the conference, that’s a lot!

  • Two of the invited talks and one of the keynotes mentioned single-photon sensors prominently.

  • The Platinum Sponsor of the conference, Sony, highlighted the usage of single-photon sensors for 3D imaging in their industry consortium talk.

As a company deeply committed to advancing this technology, we were eager to explore how our hardware and software stack can further contribute to its numerous applications.

While we were mostly there to listen and learn, we couldn’t resist the opportunity to try a new demo! This time, however, we spent the weekend collaborating with Sacha Jungerman, a researcher in the WISION Lab at the University of Wisconsin-Madison (go Badgers!). His work on “Panoramas from Photons,” co-authored with Prof. Mohit Gupta and Prof. Atul Ingle, will appear at ICCV2023 (a premier computer vision conference) in October. So, it was a privilege to get a sneak preview of his algorithm running on data from our solution.

We don’t want to spoil the details of his work, but the gist is this: panoramic photography typically involves moving a camera around while capturing multiple photos and/or video. And, as we all know, moving a camera around can introduce motion blur and artifacts from high-dynamic-range processing, both of which can significantly degrade the quality of the final result. This is especially true if the motion is fast; if you’ve ever tried to shoot a panoramic image with the telephoto lens on your smartphone, you’ve probably seen this message:

The method outlined in Sacha’s paper exploits the uniquely “raw” nature of single-photon output to generate high-quality results, even with extreme motion and dynamic range.
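To unpack what “raw” means here: each pixel on a single-photon sensor reports a stream of 1-bit detections rather than accumulated charge. A common idealized model in the single-photon imaging literature (ignoring quantum efficiency and dark counts) treats each frame as a Bernoulli trial with detection probability p = 1 − exp(−flux × exposure); inverting it recovers a linear flux estimate even in regions bright enough to saturate a conventional sensor. A minimal sketch, with illustrative names only; the paper’s actual processing may differ:

```python
import numpy as np

def estimate_flux(detection_rate, frame_exposure):
    """Invert the idealized SPAD measurement model.

    Each 1-bit frame registers a detection with probability
    p = 1 - exp(-flux * frame_exposure), so the average of many
    frames estimates p, and inverting that recovers photon flux
    linearly, even where a conventional sensor would saturate.
    """
    p = np.clip(detection_rate, 0.0, 1.0 - 1e-9)  # guard against log(0)
    return -np.log1p(-p) / frame_exposure         # -ln(1 - p) / exposure
```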

Let’s dive into the setup we used for this demo. We mounted a long 70mm lens on our evaluation kit camera and quickly “scanned” our subject, which, in this case, was none other than the conference attendees posing for a group photo! To give a sense of the speed, here is a clip of the input played back in real time, created by simply averaging the raw photon data into a 30 fps video (see the sketch below):

Several Ubicept team members are in this clip, but the motion blur and extreme dynamic range make it a challenge to recognize anyone. This is even more obvious in the still frames—we believe this is Felipe Gutierrez Barragan, one of our Senior Software Engineers, but even he couldn’t tell:
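For intuition, the “simple averaging” above amounts to binning runs of consecutive 1-bit photon frames into conventional 8-bit frames. A rough sketch, assuming the sensor produces binary frames at a known high rate; the names and rates are illustrative, not our actual pipeline:

```python
import numpy as np

def bin_photon_frames(binary_frames, photon_fps, video_fps=30):
    """Average consecutive 1-bit photon frames into 8-bit video frames.

    binary_frames: (T, H, W) array of 0/1 photon detections.
    photon_fps:    acquisition rate of the single-photon sensor.
    video_fps:     playback rate of the output video.
    """
    n = int(photon_fps // video_fps)                # photon frames per video frame
    usable = (binary_frames.shape[0] // n) * n      # drop the ragged tail
    blocks = binary_frames[:usable].reshape(-1, n, *binary_frames.shape[1:])
    mean = blocks.mean(axis=1)                      # detection rate in [0, 1]
    return (mean * 255.0).astype(np.uint8)          # grayscale video frames
```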

Now, below is the result of the “Panoramas from Photons” algorithm with some standard tone-mapping applied to compress the dynamic range. Since it’s an extremely long image—scrolling all the way from left to right feels like running the length of Camp Randall Stadium (go Badgers!)—we took the liberty of rendering it back into a video:

As you can see, everyone (including Felipe!) is recognizable and there’s plenty of detail in both the lightest and darkest regions of the scene.
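A quick note on the tone-mapping step: the reconstructed panorama has far more dynamic range than any display can show, so it must be compressed into a viewable range. As one illustrative example of a standard operator (not necessarily the exact one we used), a global Reinhard tone map looks like this:

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0, gamma=2.2):
    """Compress linear HDR values into a displayable [0, 255] range."""
    x = hdr * exposure                           # pick an overall brightness
    x = x / (1.0 + x)                            # Reinhard operator: asymptotes to 1
    x = np.clip(x, 0.0, 1.0) ** (1.0 / gamma)    # gamma-encode for display
    return (x * 255.0).astype(np.uint8)
```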

You may be thinking: couldn’t this be done with a conventional digital SLR and copious exposure bracketing? In this particular case, yes, but there are plenty of “moving panorama” scenarios where more conventional approaches become prohibitive. We actually showed one of these when we captured a long photo of a train in a previous blog post.

Admittedly, we put that together by running a more conventional algorithm on the high-speed video output. By operating directly on the photon data, the “Panoramas from Photons” algorithm would have achieved significantly better results.

We’re very much looking forward to reading the final paper, Sacha! Go Badgers!
