In today’s ultra-connected, camera-covered smart factories, the volume of visual data generated and transferred taxes even the most powerful computing and network systems. Machine vision systems support a wide range of functions that improve performance, efficiency, quality and safety. But they have typically relied on traditional, frame-based techniques to collect visual data, and these are proving grossly inefficient: in some applications, 90 percent or more of the captured data is redundant, demanding extra computing horsepower, storage, bandwidth and energy.
In frame-based video, an entire image (i.e., the light intensity at every pixel) is recorded at a pre-assigned interval known as the frame rate. While this works well for reproducing the “real world” on a screen, recording the entire image at every time increment oversamples all the parts of the scene that have not changed.
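A toy comparison makes the oversampling concrete. The sketch below, with illustrative frame sizes and a hypothetical moving part, counts how few pixels actually differ between two consecutive frames when only a small region of the scene moves:

```python
import numpy as np

# Illustrative example (not from any specific camera): two consecutive
# 640x480 grayscale frames in which only a 20x20 region (a moving part)
# changes. Every other pixel is re-recorded despite carrying no new data.
h, w = 480, 640
frame_a = np.zeros((h, w), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[100:120, 200:220] = 255  # the only change between the frames

changed = np.count_nonzero(frame_a != frame_b)
total = h * w

print(f"Pixels per frame:    {total}")     # 307200
print(f"Pixels that changed: {changed}")   # 400
print(f"Redundant fraction:  {1 - changed / total:.1%}")
```

Here more than 99 percent of the second frame duplicates the first, which is exactly the kind of waste the article describes.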
What’s important in industrial use cases of machine vision is only what changes in a scene. We call that an event. By taking a cue from human biology, we can replicate the efficiency of how our eyes work with a new form of efficient vision capture called event-based vision.
Consider how the brain and eyes deal with a massive amount of visual data in every waking moment. Evolution has given us shortcuts to cope with this data deluge. For example, the photoreceptors in our eyes only tell the brain when they detect change in some feature of the visual scene, such as its contrast or luminance. It is far more important to be able to concentrate on movement within a scene than to take repeated, indiscriminate inventories of its every detail.
Event-based vision senses visual activity by processing only the information that changes in a scene. Each of the sensor’s pixels operates independently and reports only changes in the light falling on it. If the incident light is not changing, the pixel stays silent. When the scene changes, the affected pixels report it; when many objects pass, those pixels report a sequence of changes.
This lets machine vision systems focus on the information that matters, greatly reducing the amount of data to be analyzed.
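The per-pixel behavior described above can be sketched in a few lines. This is a minimal model of an event-camera pixel, not Imago’s actual hardware: the contrast threshold, function names and the frame-snapshot interface are assumptions for illustration. Each pixel fires an event, with a polarity, only when its log intensity has changed by more than a threshold:

```python
import numpy as np

# Hypothetical contrast threshold in log-intensity units (an assumption;
# real sensors expose this as a hardware bias setting).
THRESHOLD = 0.2

def events_from_snapshots(prev_log, curr_log, t):
    """Return (y, x, polarity, t) events for every pixel whose log
    intensity changed by more than THRESHOLD between two snapshots.
    Unchanged pixels produce nothing at all."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) > THRESHOLD)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(y), int(x), int(p), t) for y, x, p in zip(ys, xs, polarity)]

# A static scene yields no events; a single brightening pixel yields one.
prev = np.log(np.full((4, 4), 100.0))
curr = prev.copy()
curr[1, 2] += 0.5  # one pixel gets brighter
print(events_from_snapshots(prev, prev, t=0))  # []  (nothing changed)
print(events_from_snapshots(prev, curr, t=1))  # [(1, 2, 1, 1)]
```

The static scene produces zero output, which is the source of the bandwidth and power savings the article cites.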
Imago, a supplier of machine vision systems for automation applications, has seen the benefits, including reduced data bandwidth, decision latency, storage requirements and power consumption, as well as better visibility in difficult lighting conditions.
Imago’s experience has shown that event-based vision sensing can improve overall factory throughput by bringing ultra high-speed, affordable vision to manufacturing. Its systems enable more effective quality control and maintenance to ensure efficient operations.
This includes manufacturing process control and preventive maintenance, using machine vision to analyze equipment process deviations through kinetic or vibration monitoring. Imago’s systems improve predictive maintenance by measuring and monitoring equipment vibrations from 1 Hz to 10 kHz remotely, continuously and in real time under normal lighting conditions. The mechanical state, integrity and robustness of a piece of equipment can be inferred from its vibration frequencies, and a change in the vibration of production equipment is a primary indicator of deviation from its normal operating set point. This information lets the maintenance team observe and understand a process deviation long before a machine malfunctions.
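To see how a vibration frequency can be recovered from such monitoring, here is a hedged sketch. It assumes the sensor’s events at a vibrating edge have already been binned into a regularly sampled activity signal (the signal here is a synthetic sine wave standing in for real event counts, and the rates are illustrative); a Fourier transform then exposes the dominant frequency:

```python
import numpy as np

# Assumed setup: event activity binned at 20 kHz, enough to resolve
# vibrations up to the 10 kHz upper bound mentioned in the text.
fs = 20_000          # sampling rate of the binned signal, in Hz
duration = 1.0       # seconds of monitoring
f_vib = 120.0        # hypothetical true vibration frequency of a machine

t = np.arange(0, duration, 1 / fs)
signal = np.sin(2 * np.pi * f_vib * t)  # stand-in for binned event counts

# Magnitude spectrum; the strongest non-DC bin is the dominant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(f"Dominant vibration frequency: {dominant:.1f} Hz")
```

A maintenance system would track this dominant frequency over time and flag drift away from the machine’s normal operating set point.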
Imago is now deploying event-based vision in machines that enable high-performance vision in robotics, assembly, object tracking, spatter monitoring, calibration, and security and safety monitoring.