A simpler example is a straightforward pass/fail check of the product on the assembly line or before shipment. PCB inspection is a common use case: as each production board leaves the automated pick-and-place system, the vision system can quickly and easily compare its image against that of a known-good reference board before the next stage.
This is a valuable step for quality assurance and waste reduction. The human eye and brain cannot reliably repeat such a check hundreds or even thousands of times every day, but machine vision can.
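Such a golden-image comparison can be sketched in a few lines. The snippet below is a minimal illustration in Python with NumPy, not a production inspector: a real system would first align the images and apply per-region tolerances, and the function name and thresholds here are hypothetical.

```python
import numpy as np

def pass_fail(reference: np.ndarray, captured: np.ndarray,
              pixel_tol: int = 25, max_bad_fraction: float = 0.01) -> bool:
    """Compare a captured grayscale image against a golden reference.

    A pixel is 'bad' if its intensity differs from the reference by more
    than pixel_tol; the board fails if too many pixels are bad.
    """
    diff = np.abs(captured.astype(np.int16) - reference.astype(np.int16))
    bad_fraction = np.mean(diff > pixel_tol)
    return bool(bad_fraction <= max_bad_fraction)

# Toy example: a uniform reference board, one good capture, and one
# capture with a 20x20 missing-component region (4% of the pixels).
ref = np.full((100, 100), 128, dtype=np.uint8)
good = ref.copy()
defective = ref.copy()
defective[40:60, 40:60] = 0

print(pass_fail(ref, good))        # True
print(pass_fail(ref, defective))   # False
```

The whole decision reduces to one vectorized subtraction and a threshold, which is why such checks run comfortably at line speed.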
As the resolution of the image-capture system increases, so does the capability of machine vision, because the detail available for evaluation grows at a corresponding rate. Ever smaller subsets of the visual information can be evaluated against the master template, which increases the burden on the system processor to ingest the data without loss and still make decisions quickly enough for the subsequent steps.
Take agricultural vegetable grading as an example. Grading has traditionally been a simple assessment of size and pass/fail quality, and the quality criteria change with the seasons. To save costs while preserving vegetable quality as far as possible, more sophisticated grading algorithms are needed, a task that is nearly impossible for the human eye and brain. A customized smart-camera solution can process this volume of information, although it may require multiple stages and cameras, machine lighting, additional sites, and so on.
One way to provide the wide range of processing capability this requires is a platform that can act either as a centralized processing unit with a high-bandwidth connection or as a distributed processing unit inside a smart camera. In the distributed case, data is processed in real time directly in the camera, and only the result for each product needs to be transmitted to the final mechanical grading system.
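The bandwidth saving of in-camera processing can be illustrated with a small sketch: the camera reduces a full frame to a compact grade result, so only a few bytes travel to the grading machinery instead of the whole image. The segmentation rule and grade thresholds below are invented for illustration.

```python
import numpy as np

def grade_in_camera(frame: np.ndarray) -> dict:
    """Run grading inside the smart camera; return only a compact result.

    Hypothetical criteria: estimate produce size from the bright region
    of the frame and map it to a grade letter.
    """
    mask = frame > 100                      # segment produce from dark belt
    size_px = int(mask.sum())
    grade = "A" if size_px > 3000 else "B" if size_px > 1000 else "reject"
    return {"size_px": size_px, "grade": grade}

# Toy frame: a 100x100 bright item on a 200x200 dark background.
frame = np.zeros((200, 200), dtype=np.uint8)
frame[50:150, 50:150] = 180

result = grade_in_camera(frame)
print(result["grade"])                      # A
print(frame.nbytes, "bytes captured, only the small result is sent")
```

The 40 KB frame never leaves the camera; the downstream grading system receives only the per-product result.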
Smart cameras can also be combined with interchangeable sensor heads. For example, a hyperspectral imaging head can perform nondestructive testing of food quality and safety. In standard vision systems, food quality and safety are usually judged from external physical attributes such as texture and color.
Hyperspectral imaging gives the food industry the opportunity to bring new attributes, such as chemical and biological properties, into quality and safety assessments, for example to determine the sugar, fat, moisture, or bacterial content of a product. Hyperspectral imaging acquires a three-dimensional image cube that holds both spatial and spectral information for every pixel.
More spectral features allow attributes to be distinguished more finely, so more attributes can be recognized. The image cube records the intensity at every pixel for every acquired wavelength of reflected or transmitted light, so each cube contains a great deal of information. This data volume represents a steep growth in the computational challenge of extracting qualitative and quantitative product-classification results in real time.
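The scale of the problem is easy to see with a back-of-the-envelope calculation. Assuming a hypothetical cube of 1024x1024 pixels and 224 bands at 16 bits per sample (typical orders of magnitude for hyperspectral sensors), a single frame is nearly half a gigabyte; the band ratio shown afterwards is one simple example of a per-pixel spectral computation.

```python
import numpy as np

# Hypothetical cube geometry: 1024x1024 spatial pixels, 224 spectral
# bands, 16 bits (2 bytes) per sample.
h, w, bands = 1024, 1024, 224
cube_bytes = h * w * bands * 2
print(cube_bytes / 1e9, "GB per cube")      # ~0.47 GB per frame

# A small synthetic cube to illustrate a per-pixel spectral feature:
# a simple two-band ratio, one value per spatial pixel.
rng = np.random.default_rng(0)
cube = rng.uniform(0.1, 1.0, size=(4, 4, 10))   # (rows, cols, bands)
ratio = cube[:, :, 8] / cube[:, :, 2]
print(ratio.shape)                          # (4, 4)
```

Even this trivial feature touches every sample in the cube once; realistic classifiers touch each sample many times, which is what drives the processing requirements described below.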
One answer is an accelerated processing unit (APU) in the smart-camera platform, which combines the GPU and CPU on the same chip. The system can then offload the pixel-intensive processing of a vision application to the GPU without the high-latency bus transactions between discrete components.
This frees the CPU to service other interrupts with lower latency, which improves the real-time performance of the whole system and helps meet the growing processing requirements of modern vision systems. The GPU is a massively parallel engine that applies the same instruction across a large data (pixel) set simultaneously; this is exactly what machine vision needs. Performance can be raised further by pairing the APU with an external discrete GPU in the Mobile PCI Express Module (MXM) form factor, adding GPU processing resources when needed to support more intensive vision tasks.
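The data-parallel pattern the GPU exploits can be illustrated on the CPU with NumPy: a single vectorized expression applies the same comparison to every pixel at once, equivalent to the explicit per-pixel loop below it. On an APU, the same kernel would be dispatched to the integrated GPU (for example through OpenCL or OpenCV's transparent API); the image size and threshold here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(500, 500), dtype=np.uint8)

# One "instruction" applied to all 250,000 pixels at once: the
# data-parallel pattern a GPU executes natively.
binary = img > 128

# The equivalent scalar loop, one pixel at a time, as a plain CPU
# would do without SIMD or GPU help.
check = np.zeros_like(binary)
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        check[y, x] = img[y, x] > 128

print(bool(np.array_equal(binary, check)))  # True
```

Both paths compute the same result; the point is that the work per pixel is identical and independent, which is what makes it trivially parallelizable across GPU cores.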
On the software side, the heterogeneous processing platform can be managed by the standard Linux kernel, with only moderate development effort needed for each new kernel version. x86 ecosystem support lets companies take advantage of open-source and third-party image-processing libraries such as OpenCV, MathWorks MATLAB, and Halcon. Debugging, profiling, and tracing tools (perf, ftrace) are also widely available.
Machine vision is a good example of how scalable processing can play a role in embedded applications.
Source: langruizhike