VISION SYSTEMS IN PROCESS MANUFACTURING: Many predict the days of lengthy, rules-based programming routines are just about over
To a discrete manufacturer, process manufacturing is odd territory indeed. It’s a world in which textiles, pharmaceuticals, chemicals, plastics, and food and beverage are produced en masse. Job quantities are defined by how many liters or meters of something you make per batch, not how many parts are fabricated or assembled per shift. Operators work to recipes rather than bills of material, and raw materials take powder or liquid form instead of billets, sheets and rolls. These bulk materials are brought together and—depending on the end product—measured, mixed, and cooked, woven, pressed, or sorted, and finally packaged into smaller, more sellable units.
Again, it’s a different world.
There are some similarities, however.
Both types of manufacturing increasingly use vision systems either for monitoring processes to ensure quality control or, in conjunction with robotics and other material-handling technologies, for sorting and packaging purposes (sometimes both).
And while it doesn’t make sense to mount a camera inside a heated vat of polymeric goo during the plastic compounding process, it does make sense to install one on the business end of an extrusion machine to watch for defects.
Similar arguments exist for carpet making, beer brewing, steel rolling, and pill pressing—in each instance, vision systems are usually found downstream, where the bulk products are converted into discrete objects.
Consider the carpeting example just mentioned.
Mike Fussell, technology product manager at FLIR (forward-looking infrared) Systems, said this has long been an application for line-scan cameras. Here, a single row of CMOS pixels sits above the object being scanned, capturing image data one line at a time as the roll of carpet (or plastic film, sheet steel, textiles, etc.) passes beneath it. It works much like a fax machine, providing for the digital reconstruction of practically any flat, sheet-like object, albeit much more quickly than those office relics.
“Because the product being scanned is typically quite wide, line-scan cameras have historically provided a cost-effective means of detecting defects in these types of continuous flow items,” he said. “That has begun to change over recent years, though, as manufacturers attempt to increase throughput by making their rolled goods wider and wider. Customers are now finding they can achieve equivalent performance using several area-scan cameras in a series. These can grab complete frames containing millions of pixels all at once, are more flexible than line-scan systems, and because the sensors are smaller, the use of more compact and less expensive optics is possible, enabling equivalent performance for a significantly lower cost.”
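The line-scan principle described above can be sketched in a few lines of code. This is a toy illustration, not any vendor's API: `read_line` stands in for the sensor readout, and the eight-pixel width and grayscale values are invented for demonstration.

```python
# Hypothetical sketch of line-scan acquisition: a single row of pixels is
# read on every encoder tick, and the rows are stacked into a 2D image of
# the moving web. read_line() and its values are illustrative stand-ins.

def read_line(tick, width=8):
    """Pretend sensor readout: one row of grayscale pixel values."""
    return [(tick * 10 + x) % 256 for x in range(width)]

def scan(ticks, width=8):
    """Stack one row per encoder tick into a full image of the passing material."""
    return [read_line(t, width) for t in range(ticks)]

# Four ticks of belt travel yield a four-row image; in a real system the
# roll may be thousands of rows long and meters wide.
image = scan(ticks=4)
```

The fax-machine analogy holds: the image height is set by how long the material runs under the sensor, not by the sensor itself.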
The show-me state
Laser light and photoelectric beam devices are another common technology for process manufacturers. But here again, these and other traditional sensors are gradually giving way to more flexible and capable CMOS-based cameras.
Rainer Schoenhaar, product manager for vision products for Balluff, suggested that an increasing number of machine-monitoring and quality-control functions are moving to intelligent vision systems.
“The algorithms are getting smarter and the costs are coming down,” he said. “The biggest challenge in these kinds of machine vision applications remains the lighting.”
The first of these factors—the algorithms—is perhaps the biggest reason behind vision’s rapid growth.
Schoenhaar and others explained that, as with self-driving cars, the software used to analyze the millions of images collected by these high-speed, high-resolution cameras is reaching artificial-intelligence levels.
Though the industry is not yet there, most predict that the days of lengthy, rules-based programming routines will yield to deep learning technology, where the system is shown what a product should look like and left to figure things out on its own.
“Much like an experienced operator can look at a product and instantly determine whether it is good or bad, so can many vision systems,” he said. “Yes, there is currently some training involved, teaching the computer what patterns to look for, the distances and areas they should check, and what tolerances are acceptable, but even that step is becoming simpler. For this reason among others, machine vision is the clear path forward for many industries, process manufacturing among them.”
Light it up
That brings us to the lighting aspect. As Schoenhaar noted, most cameras require light to see.
Aside from low light levels, factors such as shadowing, glare and poor contrast can adversely affect system accuracy and repeatability.
These limitations, however, have begun to fade away as vision suppliers improve their wares.
“Vision has evolved significantly over the past five years or so,” said Adam Bainsky, senior technical marketing manager for machine vision technology at Keyence Corp. of America. “We’ve introduced several technologies that utilize different lighting techniques, creating novel ways to identify defects and make the overall inspection easier.”
One example of this is pattern projection.
Bainsky explained that by projecting light through a semi-transparent panel, a series of striped patterns can be made to appear on the surface of the object being inspected.
The camera sees these patterns, gathers dimensional information about them relative to their location on the object, and with some clever computer processing, is able to determine the height of certain features.
The result is 3D measurement from what would otherwise be a 2D image.
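A simplified way to see how a projected pattern yields height data is basic triangulation: a stripe projected at an angle lands farther sideways the taller the surface it hits, so the observed lateral shift encodes height. The angle and shift values below are invented for illustration; real pattern-projection systems use calibrated multi-stripe phase techniques rather than this single-stripe model.

```python
# Hedged sketch of pattern-projection height measurement. A stripe projected
# at angle theta from vertical shifts sideways on a raised surface; the shift
# is proportional to height: h = shift / tan(theta). Toy numbers only.
import math

def height_from_shift(shift_mm, projector_angle_deg):
    """Recover feature height from the observed lateral stripe shift."""
    return shift_mm / math.tan(math.radians(projector_angle_deg))

# With a 45-degree projector, a stripe shifted 2 mm implies a 2 mm tall feature.
h = height_from_shift(2.0, 45.0)
```

This is the sense in which "clever computer processing" turns a flat 2D image into a height map: every stripe displacement in the frame becomes a height sample.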
In another example, a camera is used to image an object as it passes through its field of view—a can of carrots, for instance, or a box of cereal, either of which might be traveling at several hundred feet per minute.
This image is then sent to a computer for processing, which compares it against a master image. If the can is slightly dented or the box not sealed properly, the system recognizes this and alerts the controller that the object is defective.
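The master-image comparison just described can be reduced to a simple rule: count the pixels that deviate from the golden reference by more than some tolerance, and flag the object if there are too many. The 3x3 "images" and thresholds below are toy values; production systems add alignment, normalization, and region-of-interest logic on top of this idea.

```python
# Minimal sketch of master-image comparison: an object is defective when too
# many pixels differ from the reference image by more than a tolerance.
# Image sizes and threshold values are illustrative assumptions.

def is_defective(image, master, pixel_tol=10, max_bad_pixels=0):
    """Count pixels deviating from the master; too many means a defect."""
    bad = sum(
        1
        for row_img, row_ref in zip(image, master)
        for p, r in zip(row_img, row_ref)
        if abs(p - r) > pixel_tol
    )
    return bad > max_bad_pixels

master = [[100, 100, 100]] * 3
good_can = [[104, 98, 101], [100, 100, 100], [97, 103, 100]]   # within tolerance
dented_can = [[104, 98, 101], [100, 30, 100], [97, 103, 100]]  # one dark dent
```

The controller alert is then just the boolean result of this check, raised fast enough to act before the can leaves the line.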
Finding the bad apples
But what then?
As mentioned, most such production lines move products along at a rapid clip. How is the dented can of carrots segregated from its salable counterparts?
Wesley Garrett, international network sales manager for pick/pack/pal and warehousing at FANUC America, has an answer: He indicated that keeping track of such defective products and removing them at the appropriate time is a straightforward task for an automated system.
“Vision systems are becoming more and more common on the picking and packing side of process manufacturing,” he said. “You might have a raw product—a hunk of cheese, or a candy bar—coming down a conveyor belt, and you need to set it into a tray or other package. So we will utilize vision to find that object, determine its X and Y location, its rotational angle, and then relay that information to a downstream robot.
“Images are snapped at regular frequencies, often enough that we can catch all of the products passing underneath, and since there are a series of pulse encoders on the belt, we’re able to tell the robot where each one is and how to grab it.”
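The encoder arithmetic behind this is straightforward: the system records the belt's encoder count at the moment the image is snapped, and the object's current position is its imaged position plus however far the belt has moved since. The calibration value and positions below are hypothetical, not FANUC specifications.

```python
# Hedged sketch of encoder-based conveyor tracking. The belt position at the
# snapshot is latched; the pick position is the imaged location plus the belt
# travel since then. MM_PER_COUNT is an illustrative calibration constant.

MM_PER_COUNT = 0.5  # belt travel per encoder pulse (hypothetical value)

def pick_position(x_at_snap_mm, counts_at_snap, counts_now):
    """Where the object is now, given where the camera saw it."""
    travel = (counts_now - counts_at_snap) * MM_PER_COUNT
    return x_at_snap_mm + travel

# Object imaged at x = 120 mm; the belt has advanced 300 counts since then.
x_now = pick_position(120.0, counts_at_snap=1000, counts_now=1300)
```

Because the update uses encoder counts rather than time, it stays accurate even when the belt speeds up or slows down between snapshot and pick.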
These capabilities can also be applied to the aforementioned can of carrots, a broken cookie or practically any object that falls outside the vision system's predefined tolerance band, shuttling it off to a scrap bin.
Vision-equipped robotics eliminate reliance on human inspectors, Garrett added, freeing them for more intelligent and less tedious tasks while simplifying what would otherwise be elaborate (and expensive) fixtures and conveying systems.
“Vision just makes everything simpler,” he said.
John Agapakis, director of traceability solutions at Omron Automation Americas, agreed.
He noted that vision-equipped packaging lines are also more flexible. A single vision system can be taught to inspect different types or sizes of packaged products on the same line. Changeover becomes a matter of pulling up a different inspection routine and possibly adjusting the lighting conditions, after which the system can get back to work. And capabilities built into modern smart cameras, such as multi-directional, multi-color lighting and autofocus optics using liquid lens technology, simplify the changeover process.
Lastly, they reduce process manufacturing risk.
“Think about pharmaceuticals, which on a manual line need to be 200 percent inspected for correctness and legibility of the printed lot code, expiration date or other information on a label,” Agapakis said. “The same is true for cosmetics, where vision is also used to check that the product color matches the packaging, or in the canning industry, which faces huge liability if the regular green beans are placed into a can that’s labeled ‘low sodium.’ Machine vision serves to eliminate this possibility and makes the packaging process more efficient besides.”
Fernando Callejon, Omron’s product manager for machine vision, said that customers in need of high-quality vision systems have many options. But he added one final bit of advice: Most systems today are CMOS-based—not the CCD technology with which many industry old-timers are familiar—and are capable of capturing very high-resolution images at high shutter speeds. However, those who don’t need to inspect fast-moving products can save some cost by going to a camera with a rolling shutter system.
This is typically the same type of camera you’ll find in a smartphone, he added. Here, the image is constructed one line at a time, as opposed to capturing it in one shot, as with its counterpart, a camera with a global shutter system. The result is a higher resolution imaging system—albeit limited to slow-moving or fixed objects—for roughly the same price.
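The reason a rolling shutter is limited to slow-moving or fixed objects comes down to simple arithmetic: because rows are read out one at a time, an object moving during readout lands in a slightly different place on every row, skewing the image. The readout time and speeds below are illustrative assumptions, not specifications for any particular sensor.

```python
# Back-of-envelope sketch of rolling-shutter skew: the distance an object
# travels between the first and last row being read out. Toy numbers only.

def skew_mm(object_speed_mm_s, rows, row_readout_s):
    """How far the object moves during one full-frame readout."""
    return object_speed_mm_s * rows * row_readout_s

# Assume a 2000-row sensor at 10 microseconds per row (20 ms per frame).
slow = skew_mm(5.0, 2000, 10e-6)     # 5 mm/s object: negligible skew
fast = skew_mm(1000.0, 2000, 10e-6)  # 1 m/s object: severe skew
```

A global shutter avoids the problem entirely by exposing every row at once, which is why it remains the choice for fast-moving product.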
“We have customers who incorporate both technologies into the same machine,” Callejon said. “They might use a single camera with a rolling shutter system for very high-resolution imaging in one area, and then place cameras with global shutters elsewhere. The beauty of this approach is that you can take advantage of both technologies, connect multiple cameras to the same controller, and mix and match their capabilities based on your specific needs. Vision today provides a great deal of flexibility, but is also becoming easier to use, more cost-effective, and simpler to implement.”
Vision-equipped robots will need taskmasters
With robots now doing everything from flipping hamburgers to sorting fruits and vegetables, you might be wondering when your job will be lost to automation. On a larger scale, what will happen to our economy when robots assume large numbers of these and other menial tasks? Will unemployment numbers skyrocket and countless people go hungry? Or will humanity enter a golden age where people have the free time to pursue more meaningful activities?
Wesley Garrett of FANUC America is leaning toward the latter. He suggested the jobs that robots are filling are those that no one else wants, forcing employers to automate wherever possible. “The reality is that much of the unskilled labor force just doesn’t exist anymore,” he said. “Very few young people want to perform laborious tasks like inserting widgets on an assembly line, or standing there pulling donuts out of a deep fryer all day long. The baby boomer generation grew up with these jobs, and while they might not have necessarily liked them, they did it because there was no alternative. Smart, vision-equipped robots have begun to change that paradigm.”
There’s a lot of good news in this scenario. Young people might not be interested in shoving parts into cardboard boxes for minimum wage, but an increasing number of them have no problem teaching robots how to do those tasks. So instead of scrambling to find a dozen workers to serve on a production line, employers can find one or two technical people to oversee a fleet of droids. What’s more, this next generation of human taskmasters will likely need nothing more than a high school diploma, some mechanical aptitude and a willingness to learn. “Given that most of these low-skill jobs would otherwise go unfilled and the work end up in another country, advanced robotics makes a heck of a lot of sense,” Garrett said.