Deep vision inspection using AI enables multiple emerging markets

Michaël Uyttersprot, Market Segment Manager Artificial Intelligence and Vision
[Image: close-up of automated machinery with robotic arms and circular glass components. Caption: Adding AI to production vision systems could extend their capability.]

Machine vision already serves many established and emerging markets, including security, manufacturing and industrial automation. Adding artificial intelligence (AI) inferencing at the sensor extends what these systems can do.

Industrial automation combines robotics with vision systems. The vision systems in these applications are typically separate functions providing position data to the robot. Adding AI to production vision systems could extend their capability. The closer integration of robotics with intelligent vision systems will lead to new and more capable industrial automation.

Inspection is a separate and important part of the production process. Adding AI to machine vision-based inspection systems increases throughput and productivity, leading to higher profitability. The main demands on these systems are speed, accuracy and convenience in operation.

Optimized visual inspection

AI relies on inferencing. In a high-speed production environment, the AI model must run close to the image sensor to avoid latency. The model infers results from the data provided by the sensor. The accuracy of that inferencing will depend on how well the model was trained. Training a model requires a good selection of sample images. In an application where the object detected can vary greatly, such as animals, training can be compute-intensive and time-consuming.

For a production environment, particularly one where the objects are largely regular, the training requirements are less demanding. These kinds of inspection systems can be useful for detecting defects in a production line of similar objects.

A simpler task also needs fewer parameters, so the model itself can be relatively small. The more parameters a model has, the more data is needed to train it, and more training data demands more powerful processors, which is why most AI training is carried out on large servers in the cloud.

In an automated production environment, the product may change frequently. Training an AI vision system on all possible products would be expensive. Deploying a new model when the manufacturing line changes could become inconvenient. In this scenario, the OEM needs a solution that can be repurposed quickly, with limited time needed to train the system. Ideally, training would also be carried out using unlabeled data, referred to as unsupervised training.

These requirements create significant challenges for developers. The training process for AI systems is more compute-intensive than inferencing. The ideal system would be small and low power, which implies using processors with limited compute power. While running AI models on low-power compute platforms is possible, training at the edge is still unusual due to the system-level requirements.

Using the same hardware for both training and inferencing is a challenge. Fortunately, defect detection is simpler than object recognition. In this scenario, the nature of the object is not important. It is safe to assume that all the objects passing in front of the camera are the same, so there is no requirement to identify the object.

The requirement therefore is to identify defects or abnormalities. This would include any feature that does not comply with the known good samples used for training. From a compute point of view, this reduces the number of parameters needed to train the model.
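
The article does not say how the DVI system models the known good samples. One common approach to this kind of defect detection is a small convolutional autoencoder trained only on defect-free images: anything the model cannot reconstruct well is scored as an anomaly. The sketch below is purely illustrative; the architecture, sizes and hyperparameters are assumptions chosen for brevity, not details of the DVI product.

```python
# Illustrative sketch: a convolutional autoencoder trained only on known-good
# images. Defects show up as regions the model cannot reconstruct, so the
# per-image reconstruction error serves as the anomaly score.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(height=128, width=128, channels=1):
    inputs = layers.Input(shape=(height, width, channels))
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(channels, 3, padding="same", activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def train_and_score(good_images, test_images):
    # good_images: float32 array in [0, 1], shape (n, 128, 128, 1),
    # containing only defect-free parts from the production line.
    model = build_autoencoder()
    model.fit(good_images, good_images, epochs=20, batch_size=8, verbose=0)
    reconstructed = model.predict(test_images, verbose=0)
    # Mean squared reconstruction error per image = anomaly score.
    return np.mean((test_images - reconstructed) ** 2, axis=(1, 2, 3))
```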

With limited parameters, training is more easily achieved, even on a compute platform with much lower performance than a cloud server. To prove the concept, Avnet Silica partnered with Deep Vision Consulting to develop a deep vision inspection system that trains AI models at the edge.

Unlike other AI-enabled vision systems, the deep vision inspection (DVI) system can be trained at the edge using a small number of samples from the production line. The training is restricted to the specific product being inspected. This limits the number of samples needed and the time taken to train the system.
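
Another way to keep training light enough for an edge processor, again offered as an illustration rather than a description of the DVI implementation, is to freeze a pretrained feature extractor and reduce "training" to computing statistics over the embeddings of the few good samples. Scoring a new part is then a distance from that good distribution. The backbone, input size and regularization below are assumptions.

```python
# Illustrative sketch: "training" reduces to computing the mean and covariance
# of embeddings from a frozen, pretrained backbone over a handful of good
# images. A new part is scored by its Mahalanobis distance to that
# distribution of good parts.
import numpy as np
import tensorflow as tf

# Frozen backbone; global average pooling gives one embedding per image.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3), weights="imagenet"
)
backbone.trainable = False

def embed(images):
    # images: float32 array, shape (n, 224, 224, 3), values in [0, 255]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone(x, training=False).numpy()

def fit_good_distribution(good_images):
    feats = embed(good_images)
    mean = feats.mean(axis=0)
    # Regularization keeps the covariance invertible with only a few samples.
    cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(image, mean, inv_cov):
    d = embed(image[np.newaxis])[0] - mean
    return float(np.sqrt(d @ inv_cov @ d))  # Mahalanobis distance
```

Because training here is only a feature pass plus a covariance estimate, switching the system to a new product takes seconds on an edge device, which is exactly the quick repurposing the scenario above calls for.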


Running models on low-power platforms

The system developed by Avnet Silica and Deep Vision Consulting currently runs on platforms based on either an NXP i.MX 8M Plus or NXP i.MX 9 applications processor. These processors are ideal for edge-based AI and machine learning applications, combining high levels of integration with powerful Arm-based processor subsystems in a low-power solution.

Other resources in the application processor include NXP’s neural processing unit (NPU), which is fully exploited in this application to run AI training and inferencing. The image sensor and lens used in the DVI solution operate in the visible part of the spectrum, but the design could also use infrared, ultraviolet or X-ray sensors. In this case, good lighting and a defined field of view are the only system-level requirements.
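
As an illustration of how a model reaches the NPU on the i.MX 8M Plus, NXP's eIQ software stack lets a quantized TensorFlow Lite model run on the NPU through the VX delegate. The delegate path below is the one NXP's Yocto BSP typically installs, and the model file and tensor handling are placeholders, not files shipped with the DVI solution, which uses its own proprietary library.

```python
# Sketch: running a quantized TensorFlow Lite model on the i.MX 8M Plus NPU
# through NXP's VX delegate. Paths and model name are placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite

delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(
    model_path="defect_model_quant.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def infer(frame):
    # frame: uint8 image already resized to the model's input shape
    interpreter.set_tensor(input_info["index"], frame[np.newaxis])
    interpreter.invoke()
    return interpreter.get_tensor(output_info["index"])[0]
```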

The system provides an anomaly score for each sample inspected, indicating its degree of defectiveness. The user defines the pass/fail threshold, setting the sensitivity of the system. Images can also be annotated to show the defect.
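
As a trivial illustration (the threshold value is invented for the example), the decision is a single comparison; lowering the threshold makes the system more sensitive and flags more parts for review.

```python
# Sketch: a user-defined threshold turns the anomaly score into pass/fail.
PASS_FAIL_THRESHOLD = 0.05  # tuned by the operator on known parts

def classify(anomaly_score, threshold=PASS_FAIL_THRESHOLD):
    return "fail" if anomaly_score > threshold else "pass"
```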

The solution is Linux-based, running a proprietary C++ library with a Python API. The software for the solution is available with an unlimited and perpetual license. For more information or to arrange a demonstration, visit the dedicated Avnet Silica webpage.

See Avnet Silica's DVI Solution

About Author

Michaël Uyttersprot, Market Segment Manager Artificial Intelligence and Vision

Michaël Uyttersprot is Market Segment Manager at Avnet Silica, which is continuing to develop and ad...
