Counting and Inspecting Small Objects

Efficient high-speed processing with the Ubicept solution

“cooking show host looking very frustrated because grains of rice are flying everywhere”

(via Microsoft Bing Image Creator)

Don’t you hate it when you’re trying to make a quick dinner and the recipe asks for exactly 8,250 undamaged grains of rice? Well, Ubicept can save you time in the kitchen so that you can spend more time with your loved ones!

While this is clearly a joke—we couldn’t resist the temptation to try some AI-generated stock images—the problem itself is a very real one. Counting and inspecting small, fast objects is an important machine vision task in many industries, such as manufacturing, logistics, agriculture, and surveillance.

We set out to test our solution with—you guessed it!—a bowl of rice:

The comparison camera for our test was a current-generation “Pro” smartphone set up to capture 1080p at 240 fps on its most capable sensor. The footage below was downsampled to 60 fps so that the motion plays back at full speed:

240 fps may sound quite high, but the fast motion of the grains when poured from several feet above the bowl is apparent when we look at some individual frames:

You can see four grains in the left frame, but the motion makes it difficult to evaluate their shape. All bets are off in the center frame since the sheer volume of the falling grains makes them blur together. Even the lowering of the measuring cup isn’t perfectly sharp!

What about AI-based approaches that attempt to reduce or eliminate the blur? Recent advancements over the past few years have produced remarkable results, seemingly defying the computing principle of “garbage in, garbage out.” However, it’s important to distinguish between superficially appealing outcomes (which are great for creative purposes like generating funny images for blog posts) and those that are also accurate enough for critical machine vision applications. The Ubicept solution can achieve the latter by using precise photon data and sophisticated processing techniques.


On to the demos! If you’ve seen any of our previous high-speed demos (such as the ones for vibrating guitar strings or speeding subway trains), you probably already know that our solution can produce a high-quality slow-motion video. We’ll get to that in a moment, but it’s important to note that this process can involve a fair amount of computation and bandwidth. This can be an issue for some computer vision pipelines if they’re not optimized to handle video at, say, 4800 fps. Reducing it to 240 fps may not be desirable either, as shown by the smartphone video.
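To get a rough sense of the scale involved, here is a back-of-the-envelope calculation of raw data rates at those frame rates. The resolution and bit depth below are illustrative assumptions (uncompressed 8-bit grayscale 1080p), not Ubicept specifications:

```python
def data_rate_gbps(width, height, fps, bytes_per_pixel=1):
    """Raw, uncompressed video throughput in gigabits per second."""
    return width * height * fps * bytes_per_pixel * 8 / 1e9

# Assumed 1080p frames for illustration
rate_hs = data_rate_gbps(1920, 1080, 4800)   # ≈ 79.6 Gbit/s at 4800 fps
rate_phone = data_rate_gbps(1920, 1080, 240) # ≈ 4.0 Gbit/s at 240 fps

print(f"4800 fps: {rate_hs:.1f} Gbit/s")
print(f" 240 fps: {rate_phone:.1f} Gbit/s")
```

Even before compression, the 20x difference in frame rate translates directly into a 20x difference in bandwidth, which is why a more compact output representation can matter so much downstream.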

This is precisely why we’ve built the Ubicept solution to have programmable output modes. One example of this is our “event” or “neuromorphic” representation that only shows changes in pixel intensities over time. Here, green shows dark-to-light and purple shows light-to-dark:

As you can see, this allows the motion and shape of the grains to be captured at extremely high frame rates while severely limiting the amount of data generated.
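Conceptually, this kind of event representation can be sketched as a per-pixel sign of the intensity change between consecutive frames. The following is a minimal illustration using NumPy, with an assumed threshold and color mapping; it is not Ubicept’s actual processing pipeline:

```python
import numpy as np

def event_frame(prev, curr, threshold=10):
    """Quantize per-pixel intensity change between consecutive frames:
    +1 = dark-to-light, -1 = light-to-dark, 0 = no significant change."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    events = np.zeros(diff.shape, dtype=np.int8)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    return events

def render(events):
    """Map events to RGB: green for dark-to-light, purple for light-to-dark."""
    rgb = np.zeros(events.shape + (3,), dtype=np.uint8)
    rgb[events == 1] = (0, 255, 0)     # dark-to-light
    rgb[events == -1] = (128, 0, 255)  # light-to-dark
    return rgb
```

Because each pixel reduces to one of three states instead of a full intensity value, the data volume per frame shrinks dramatically, which is what makes extremely high event rates practical.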

The programmability of the Ubicept solution allows hybrid approaches as well! For instance, our system can be configured to output full-frame event data along with a narrow strip of high-speed video to show surface irregularities:
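A hybrid output like this can be thought of as compositing a full-frame event map with a band of raw intensity data. The sketch below (building on the illustrative event representation above, with an assumed row-range API) shows the idea; the function name and layout are hypothetical:

```python
import numpy as np

def hybrid_output(frame, events, strip_rows):
    """Combine a full-frame event map with a horizontal strip of raw video.
    strip_rows = (start, stop) row range rendered at full grayscale intensity."""
    out = np.zeros(frame.shape + (3,), dtype=np.uint8)
    out[events == 1] = (0, 255, 0)     # dark-to-light events
    out[events == -1] = (128, 0, 255)  # light-to-dark events
    start, stop = strip_rows
    out[start:stop] = frame[start:stop, :, None]  # raw grayscale strip
    return out
```

Only the strip carries full-bandwidth video; the rest of the frame stays in the compact event representation, keeping the total data rate low while still exposing fine surface detail where it matters.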

Finally, as promised earlier, here’s a video of our results that includes a full ten minutes of rice pouring into a bowl in super slow motion:

If this sounds appetizing and you’d like to know more, please reach out to us today about a consultation or evaluation kit. We’d love to help you cook up a solution to your most challenging visual perception problem!


Technical note: The dot patterns in the final video are from the LiDAR-assisted autofocus system in the smartphone, which was recording simultaneously. They are visible here because we used a development camera that did not have an IR cut filter installed.
