At Ubicept, we often talk about the “impossible triangle”—low light, fast motion, and high dynamic range—and how our technology enables perception even when all three are present. That said, it’s been a while since we’ve highlighted our HDR capabilities, so we decided to take a spin around town with our new color setup to show them off.
Before we dive in, let’s take a moment to talk about why high dynamic range matters for perception. Our world is full of extreme lighting contrasts. On sunny days, reflections from shiny surfaces can blind both humans and machines. At night, brilliant headlights and streetlamps create intense pools of light that leave surrounding areas in deep shadow. If a perception system can’t resolve detail across both the bright and the dark, it risks missing critical information. That’s why image sensors designed for applications like advanced driver assistance systems (ADAS) often emphasize their ability to handle these challenging scenarios.
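To put rough numbers on that kind of contrast, here is a quick calculation (the luminance values are assumed for illustration, not measurements from our footage):

```python
import math

# Illustrative luminance values in cd/m^2 for a night scene:
# an oncoming headlight beam versus a shadowed sidewalk nearby.
headlight = 50_000.0
shadow = 0.05

# Dynamic range expressed as a contrast ratio and in photographic stops.
ratio = headlight / shadow        # 1,000,000:1
stops = math.log2(ratio)          # roughly 20 stops

print(f"contrast ratio: {ratio:,.0f}:1")
print(f"dynamic range: {stops:.1f} stops")
```

A scene spanning ~20 stops is far beyond what a conventional single-exposure sensor can capture, which is exactly where the gap between systems shows up.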
For this demo, we rigged up two systems side by side: a SPAD development kit camera running Ubicept processing, and a dash camera operating in its HDR mode.
The development kit camera was mounted outside the vehicle to capture an unobstructed view. Unfortunately, the dash camera had to remain inside due to its physical design, making it more susceptible to glare from the windshield. So, while this isn’t a perfectly fair or scientific comparison, the dramatic differences you’re about to see should still offer meaningful insight into the relative performance of the two systems in real-world scenarios.
Before you press play:
We hope the comparison video speaks for itself, but we wanted to highlight a few key moments to observe if you choose to review the footage again.
First, even though the dash camera runs in HDR mode, there are plenty of situations where its dynamic range just isn’t enough. Take this frame at 3:39:
The outlined area is actually well-lit by the surrounding environment, but the dash camera sacrifices shadow detail to avoid overexposing the bright building. As a consequence, the trees disappear into the noise floor. In contrast, our system preserves both highlights and shadows, revealing the entire scene clearly.
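The tradeoff at work here can be sketched with a toy single-exposure sensor model. This is not the dash camera's actual pipeline; all the numbers (radiances, full-well capacity, noise floor) are assumed for illustration:

```python
import numpy as np

# Toy scene radiances (arbitrary units): deep shadow under the trees,
# a mid-tone road, and the brightly lit building facade.
scene = np.array([0.02, 1.0, 50.0])

def capture(radiance, exposure, full_well=100.0, noise_floor=0.5):
    """Single-exposure sensor model: the signal clips at the full-well
    capacity, and anything below the noise floor is unrecoverable."""
    signal = radiance * exposure
    clipped = np.minimum(signal, full_well)
    return np.where(clipped < noise_floor, 0.0, clipped)

# Exposing for the bright building keeps it under full well...
short = capture(scene, exposure=1.5)    # shadow falls below the noise floor
# ...while exposing for the shadow blows out the building instead.
long = capture(scene, exposure=50.0)    # facade clips at full well
```

No single exposure setting preserves both ends of the scene, so the camera has to pick which end to sacrifice.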
We also noticed some HDR-specific artifacts in the dash camera footage. In the frame at 0:27 below, the outlined region shows a sharp window, while the bright green container (moving at the same speed relative to the car) is blurred beyond recognition:
This is notable because motion blur normally encodes how far an object moved during the exposure. Conventional HDR breaks that relationship: these systems blend short exposures for bright regions with longer exposures for darker ones, so the amount of blur depends on brightness as well as on motion. The result is frames that are harder to interpret.
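That coupling can be sketched in a few lines. The exposure times and object speed below are assumed for illustration, not taken from the dash camera's specs:

```python
# Apparent motion of an object across the sensor, in pixels per millisecond.
speed_px_per_ms = 0.5

# A conventional multi-exposure HDR pipeline picks (or weights toward) the
# short exposure in bright regions and the long exposure in dark ones.
exposures_ms = {
    "bright region (short exposure)": 1.0,
    "dark region (long exposure)": 16.0,
}

# Blur streak length grows linearly with exposure time, so two objects
# moving identically can show very different blur depending on which
# exposure their brightness selects.
blur_px = {region: speed_px_per_ms * t for region, t in exposures_ms.items()}

for region, blur in blur_px.items():
    print(f"{region}: {blur:.1f} px of motion blur")
```

Under these assumed numbers, the same motion produces a 16x difference in blur between the bright window and the dark container, which matches the kind of inconsistency visible in the frame above.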
These techniques can also introduce artifacts, as shown in this frame at 3:03:
We can’t say for sure what’s happening here, since we don’t have details about the dash camera’s HDR implementation, but suffice it to say that falsely repeated objects can be confusing for downstream perception systems. The more important point, at least for this demo, is that the SPAD camera with Ubicept processing is able to deliver consistent performance across all the situations we encountered.
Please note that the still images above were mapped down to SDR for web display, so some of the shadows and highlights may appear clipped. The video itself should show the full range, so we encourage you to view it on an HDR-capable display.
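For readers curious what "mapped down to SDR" involves: we haven't detailed the exact operator used for these stills, but a common global operator like Reinhard's L/(1+L) illustrates the effect (input luminances are assumed):

```python
import numpy as np

def reinhard_tonemap(hdr_luminance):
    """Global Reinhard operator L/(1+L): compresses an unbounded HDR
    luminance range into [0, 1) for SDR display. Values at the extremes
    land very close to 0 or 1, which is why shadow and highlight detail
    can look clipped in web stills even when the source preserves it."""
    return hdr_luminance / (1.0 + hdr_luminance)

hdr = np.array([0.01, 1.0, 100.0])   # deep shadow, mid-tone, bright highlight
sdr = reinhard_tonemap(hdr)          # extremes squeeze toward 0 and 1
```

The mapping is monotonic, so no information is reordered, but the extremes get squeezed into a few display codes, and subtle detail there becomes hard to see.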
You might be thinking, “Wow, SPADs are amazing!” And they are, but they’re not enough on their own to produce results like this. We addressed this directly in a previous blog post, as well as on our Technology and Passive Vision pages. What we’re showing here isn’t the result of a special “HDR SPAD” or a dedicated HDR algorithm. It’s all part of the same core pipeline. Put simply, HDR is just one of many challenges our system is built to handle.
With that said, achieving the best results isn’t just about the sensor and processing. As we built this demo, we came to appreciate how important it is for all parts of the system to work together. In early tests using standard machine vision lenses, we found that glare significantly reduced contrast. That led us to the Sunex DSL428—we were admittedly skeptical at first of its “HDR-optimized” marketing, but it turns out the designation was well-earned!
We also ran into some practical challenges, like condensation forming on the optical components as the night cooled (note to self: bring some microfiber cloths next time). That’s something we’ll address in future demos, but the key takeaway is that the sensor and processing weren’t the limiting factors. Either way, we’re looking forward to showing even better results here with continued refinements to the optics and housing. Of course, if you want to see how our technology performs on your most demanding perception tasks, we’d love to hear from you!