Improve Industrial Automation with Embedded Vision Systems
September 19, 2018

Leaping beyond Six Sigma in Industrial Automation with Machine Vision Systems

Harish S. (Manager, Technical Sales) & Puneet Gupta (Senior Manager, System Solutions)

Over the last few years, there has been a rapid rise in the use of machine vision systems in factory automation, enabling fast, reliable decisions on process flow with little or no human intervention. This has allowed manufacturing plants and businesses to cut labor costs, improve process efficiency and operate at maximum potential.

With the increasing use of machine vision, quality assurance has advanced considerably, going beyond Six Sigma thinking. Intelligent pattern matching, object detection and classification are used in automated defect detection (torn labels, broken caps, damaged packaging) and intelligent inventory management (object counting, binning and sorting).

Driven by machine vision, industrial inspections have become more comprehensive, enabling a significantly higher degree of automation.


However, effective automation of industrial processes depends on two key considerations:

1. Speed: Decisions and inferences must arrive within short time frames to keep pace with high volumes of produced goods.

2. Accuracy: Decisions and inferences need very high precision and a very low rate of false positives to reduce operational errors and wastage.

Typically, machine vision systems used in industrial automation are expected to detect 100% of ‘critical’ defects. Acceptable missed-defect rates are <1-2% for ‘major’ defects and <2-3% for ‘minor’ defects. Similarly, false positives for detected defects should be <2-3%. However, embedded platforms are often constrained by available computational resources and memory, so embedded vision system designers and developers often trade off speed/memory for accuracy or vice versa. For quality assurance in factory automation applications, this is quite undesirable.
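To make these targets concrete, the short Python sketch below checks hypothetical field-trial counts against the thresholds quoted above. The counts are purely illustrative, not from any real deployment.

```python
# Check illustrative inspection counts against the defect-rate targets above.
# Critical defects are not shown: all of them must be caught (zero missed).

def rate(count, total):
    """Percentage of a class that was missed or wrongly flagged."""
    return 100.0 * count / total

# Hypothetical per-class totals and errors from one production run
major_total, major_missed = 480, 7       # 1.46% missed  -> within <2%
minor_total, minor_missed = 900, 22      # 2.44% missed  -> within <3%
good_total, false_alarms = 8_620, 215    # 2.49% flagged -> within <3%

print(f"major miss rate:     {rate(major_missed, major_total):.2f}%")
print(f"minor miss rate:     {rate(minor_missed, minor_total):.2f}%")
print(f"false positive rate: {rate(false_alarms, good_total):.2f}%")
```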

Fortunately, you can avoid sacrificing one for the other by making the right choices while developing the solution for a given application. Here is how.

Balancing Speed and Accuracy in Machine Vision Systems

STEP 1: Ensure the right camera set-up and illumination

Lighting and the video or image capture set-up greatly influence the efficiency and performance of a machine vision system. Poor or ineffective lighting can reduce the accuracy of the vision or learning algorithm’s output and may necessitate additional pre-processing steps, such as brightness/contrast enhancement, on the captured video or images. This in turn increases processing complexity and delays decisions and output from the vision system. Similarly, compensating for bad camera angles can mean additional processing (image rotation, cropping, de-warping, etc.), which introduces complexity and delay and might also affect decision accuracy.
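As a rough illustration of what that compensation costs, here is a minimal sketch assuming OpenCV in Python; the file name, correction gains and rotation angle are all illustrative. Each extra stage adds per-frame latency that a well-designed capture set-up would avoid.

```python
import cv2

frame = cv2.imread("capture.png")  # illustrative file name

# Brightness/contrast correction: alpha scales contrast, beta shifts brightness
corrected = cv2.convertScaleAbs(frame, alpha=1.4, beta=25)

# Compensating for a bad camera angle costs a further stage, e.g. rotation
h, w = corrected.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle=3.0, scale=1.0)
aligned = cv2.warpAffine(corrected, M, (w, h))
```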

STEP 2: Choose the right algorithm for the job: Keep it simple

Different vision processing techniques of varying complexity are used to achieve the desired output for tasks such as pattern matching or object detection/classification. Based on the product’s end use case, set-up and performance expectations, you can choose a vision algorithm that meets both the speed and the accuracy needs. For instance, to carry out shape/size/color-based matching of objects for industrial sorting/binning, you can process lower-resolution images (say 640×480) rather than full-HD images without compromising matching accuracy. Similarly, for object counting, you can resize or downscale input images before processing without compromising counting accuracy.
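A minimal sketch of that idea, assuming OpenCV in Python (the file names are illustrative): matching at 640×480 instead of 1920×1080 processes roughly 6.75× fewer pixels per frame.

```python
import cv2

frame = cv2.imread("line_camera.png")        # e.g. a 1920x1080 capture
template = cv2.imread("reference_part.png")  # golden reference image

# Downscale both the frame and the template by the same factors
small = cv2.resize(frame, (640, 480), interpolation=cv2.INTER_AREA)
tmpl = cv2.resize(template, None,
                  fx=640 / frame.shape[1], fy=480 / frame.shape[0],
                  interpolation=cv2.INTER_AREA)

# Normalised cross-correlation; a score near 1.0 indicates a good match
result = cv2.matchTemplate(small, tmpl, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(result)
print(f"best match {score:.2f} at {location}")
```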

STEP 3: Select the right platform to run the chosen algorithm

Identify and leverage a suitable platform based on the required processing capacity (video resolution, channel density and the selected vision algorithm). Wherever possible, implement the algorithm on a processor with GPU/DSP acceleration instead of CPU-only processing. Many vision algorithms are best realized on GPUs/DSPs, which offer huge advantages through parallelized pixel processing. GPUs are ideal for functions such as deep learning inference and image processing, including object identification, pattern matching and gesture recognition. Needless to say, other considerations such as form factor, power consumption and operating temperature range might also influence this decision.
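For illustration, the sketch below offloads a filter to the GPU via OpenCV’s CUDA module. This assumes an OpenCV build with CUDA support (not part of the stock pip package); the same upload/process/download pattern applies to heavier per-pixel stages.

```python
import cv2

frame = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)

# Move the frame into GPU memory
gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)

# Run a Gaussian blur entirely on the GPU
gauss = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (5, 5), 0)
gpu_blurred = gauss.apply(gpu_frame)

blurred = gpu_blurred.download()  # copy the result back to host memory
```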

STEP 4: Identify the right parameters for operation

Once the camera setup, lighting, vision algorithm and processor are decided, it is time to choose the right set of parameters for the final realization. This includes decisions on operating image resolution, frame rate, sensitivity, thresholds and other configurations that determine the trade-offs between accuracy and speed of operation.
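A hypothetical parameter set, just to make the trade-offs concrete; none of these names or values come from the article, and the right values depend entirely on the line speed and the defects being caught.

```python
# Illustrative operating parameters, grouped in one place so the
# speed/accuracy trade-offs are explicit and easy to retune in the field.
INSPECTION_PARAMS = {
    "resolution": (640, 480),   # lower -> faster, but less fine detail
    "frame_rate_fps": 30,       # must keep pace with the line speed
    "match_threshold": 0.85,    # higher -> fewer false positives,
                                # but more missed borderline defects
    "min_defect_area_px": 40,   # ignore specks below this size
    "exposure_ms": 4,           # short exposure freezes fast motion
}
```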

STEP 5: Gather feedback and improve Steps 1 to 4 continuously

If necessary, iterate through Steps 1 to 4 during initial field deployment and testing to fully qualify and tune the vision solution and deliver the expected performance. A sketch of one such tuning pass follows.
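The Python sketch below shows one way such a field-tuning pass could look: sweep the match threshold over labelled validation captures and keep the strictest value that still meets the defect-rate targets quoted earlier. The scores, labels and candidate values are all assumptions for illustration.

```python
def tune_threshold(scored_samples, candidates=(0.70, 0.75, 0.80, 0.85, 0.90)):
    """scored_samples: list of (defect_score, is_defect) pairs from the field."""
    defects = [s for s, is_defect in scored_samples if is_defect]
    goods = [s for s, is_defect in scored_samples if not is_defect]
    best = None
    for t in candidates:
        miss_rate = sum(1 for s in defects if s < t) / len(defects)
        fp_rate = sum(1 for s in goods if s >= t) / len(goods)
        if miss_rate < 0.02 and fp_rate < 0.03:
            best = t  # highest qualifying threshold minimizes false positives
    return best
```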


Ittiam has in-depth expertise in realizing embedded vision systems and can help customers realize the best solutions for their factory automation products. Explore how Ittiam’s machine vision, learning and analytics SDK (adroitVista) can deliver the best speed and accuracy for your application.

Contact Ittiam marketing (ids-mkt@ittiam.com) for more information.