
Diversity that can make your head spin

By Bruce Morey, Senior Technical Editor, SME Media

Smart factory systems benefit from improved technologies, as well as proven applications

A FANUC SR-3iA equipped with iRVision demonstrates visual circular tracking. Enabled by advanced machine vision, circular tracking allows for a compact system layout and eliminates unnecessary part handling, which minimizes potential product damage.

Electronic cameras, the basis for machine vision, are becoming ubiquitous. From smartphones to complex factory automation applications, electronic eyes are augmenting, if not replacing, human eyes in many ways.

Diversity and choice are the hallmarks of today’s industrial machine vision market. Available are everything from low-resolution, inexpensive cameras to high-resolution, multispectral cameras with built-in processing capability.

In fact, too much diversity might be the problem facing a manufacturing engineer today. It is no longer a question of whether to use machine vision, but exactly how.

Rick Roszkowski of Cognex, who has decades of experience in the machine vision field, expressed that sentiment: “I have seen machine vision come a long way,” he said. “From simple pass/fail applications in its early days to our very diverse line of products today that match an equally diverse set of applications. We are seeing a more inquisitive nature from people who are not using machine vision and want to know where it can be used.”

The industries where machine vision is used are equally diverse, including automotive, aerospace, computers, consumer products, and medical device manufacturing.

While it is easy to see that there is a huge range of applications, from simple 2D bar code readers helping track inventory to embedded machine vision in full powertrain manufacturing lines, the real story is the value today’s machine vision products provide.

Industrial cameras are getting better, faster, and—most importantly—cheaper, Roszkowski said. He attributes this to the basic technology for electronic imaging transitioning from charge-coupled device (CCD) arrays to complementary metal-oxide-semiconductor (CMOS) sensors.

CMOS cameras produce much less noise and consume less power. Cameras available today include simple barcode readers, 2D vision systems and complex 3D vision systems. There are 2D cameras with varying levels of color discrimination, including multispectral systems that see beyond the human range, at increasingly affordable prices.

“But it is not just the camera,” Roszkowski said. There is a continuum of smart sensors that contain an image sensor and a processing unit, as well as features like autofocus and sophisticated integrated lighting systems. “Lighting was always considered a separate technical component of the integration. Now you can get versatile and usable integrated lighting with the cameras,” he said.

Such smart imagers also include an integrated input/output connection to integrate the sensor into a factory automation system. Useful machine vision does not simply find an answer but provides a way to act on that answer. Cognex also offers a range of machine vision software.
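Acting on the answer can be as simple as pushing a pass/fail result to the line controller. The sketch below is a minimal illustration over plain TCP, assuming a hypothetical controller address and message format; it is not Cognex's API, and real installations would typically use a fieldbus protocol such as EtherNet/IP or PROFINET.

```python
# Minimal sketch: a vision station reports a pass/fail result to a line
# controller over plain TCP. The endpoint and message format are
# hypothetical; production systems normally use fieldbus protocols.
import socket

def report_result(part_id: str, passed: bool,
                  plc_addr=("192.168.0.10", 9100)):  # hypothetical endpoint
    msg = f"{part_id},{'PASS' if passed else 'FAIL'}\n".encode()
    with socket.create_connection(plc_addr, timeout=1.0) as conn:
        conn.sendall(msg)

report_result("PN-1042", passed=True)  # usage: flag part PN-1042 as good
```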

Guidance and inspection

To help make sense of the playing field, Cognex groups the major applications for machine vision into four broad categories: guidance, inspection, gaging and identification. Guidance includes vision-guided robotics. Inspection includes determining the correct location, surface quality and completeness of assemblies. Gaging uses machine vision to determine whether a part meets its stated GD&T with metrology-level accuracy. Identification covers reading barcodes and other marks to track parts.

FANUC is one such robotics supplier that has embraced vision guidance.

The use of the technology has grown in part through increasingly efficient integration of machine vision with integrated robotic controllers. “We released a brand-new controller at the end of last year, featuring some prominent functions for integrated robotic vision,” said David Dechow, staff engineer for intelligent robotics/machine vision at FANUC America.

“At FANUC, all of our vision systems are embedded right in the controller.” He also notes, and welcomes, that machine vision and camera suppliers have prominently supported robot guidance, especially with cheaper vision systems.

“Robots without vision are inflexible. They require precise knowledge of the position they need to go to to perform a task—pick, paint, probe,” he said.

A good example is the flexibility in picking parts off a conveyor, Dechow said. There was always a tradeoff that engineers had to consider: either use precision fixturing or the more flexible, but often more expensive, vision-guided robotics.
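To make the conveyor-picking idea concrete, here is a minimal 2D sketch in generic OpenCV, not FANUC's iRVision: segment the part, find its centroid and rotation, and map pixel coordinates into the robot's frame through a calibration homography. The calibration file and thresholding choices are assumptions for illustration.

```python
# Minimal sketch of 2D vision guidance for conveyor picking, assuming a
# fixed overhead camera and a precomputed pixel-to-robot homography.
import cv2
import numpy as np

H = np.load("pixel_to_robot_homography.npy")  # hypothetical 3x3 calibration

def locate_part(frame):
    """Return (x, y, angle) of the largest part, in robot coordinates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # empty belt: nothing to pick
    part = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(part)  # center and rotation, pixels
    rx, ry, rw = H @ np.array([cx, cy, 1.0])    # pixels -> robot frame
    return rx / rw, ry / rw, angle
```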

Epson is another robot supplier whose vision systems focus primarily on guiding robots, said Rick Brookshire, group product manager for robotics at Epson America. “However, its uses are growing, and we are now doing more inspection applications in addition to motion guidance.”

From checking completeness of assemblies to quality inspections, some users are expanding the camera systems attached to their robot systems.

This is both important and exciting because not only are companies using a single machine vision system for multiple purposes—guiding and gaging—but they are also embedding gaging in the manufacturing process. This means they are gaging and inspecting at each step of the process where a vision-guided robot is present.

This contrasts with the “old-fashioned way” of checking the product after it is made, Brookshire noted. “This enables real-time inspection instead of after-the-fact inspection,” he said.

Statistical data can be stored, and if something were to happen in the field with those products, they are more traceable to a root cause in the process itself, he added.

Brookshire also agreed that the growing range of cameras is expanding the range of applications as well.

“Most of our customers today are using the typical 2D, grayscale camera, but many are now starting to take advantage of color cameras,” checking for the color of wires in assemblies or caps of test tubes in laboratory automation applications, he said.
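A color check of this kind can be straightforward. The sketch below is a generic OpenCV illustration, not anything Epson ships: count the pixels inside an HSV range within a region of interest and require a minimum fraction. The image file, region and thresholds are hypothetical.

```python
# Minimal sketch of a color presence check, e.g., "is this cap red?"
import cv2
import numpy as np

def has_color(frame_bgr, roi, lo_hsv, hi_hsv, min_fraction=0.2):
    x, y, w, h = roi
    patch = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(patch, np.array(lo_hsv), np.array(hi_hsv))
    return (mask > 0).mean() >= min_fraction  # enough matching pixels?

frame = cv2.imread("tube_tray.png")  # hypothetical inspection image
# Check a 100x100 region for red (low-hue end only, for brevity).
print(has_color(frame, (0, 0, 100, 100), (0, 120, 80), (10, 255, 255)))
```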

Integration and applications

Another trend shaping vision-guided robotics is the nature of manufacturing itself, Brookshire said.

Flexible manufacturing is becoming the norm as consumers demand choice and variety. “Lot sizes are going down, product changeover is happening more frequently and manufacturers want multiple parts and assemblies on the same line, requiring flexible feeding on conveyor belts,” he said.

In response, Epson is teaming with another company to provide a new, integrated flexible feeding solution called the Epson Intelliflex. It employs vision systems to find the presence and orientation of parts, picking them from the feeder without fixturing or requiring them to be oriented in any way. “Customers were struggling with integrating the flexible feeder, the vision system and the robot, so we developed that for them,” he said.

Another interesting new application couples a 3D vision system with the Epson WorkSense dual-arm robot, which interacts with a part by referencing its 3D CAD model.

“The robot will use its 3D vision system to match what it sees, so that it can determine the orientation of the part,” he said. Thus, it can not only determine the best way to pick and place the part, but also gage any GD&T deviations within the limits of the sensor’s capabilities. He expects the capability to become available on all other robots, including SCARA and six-axis versions, as it becomes standard with Epson’s RC 700A controller.
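One common way to recover a part's pose from a 3D scan and a CAD reference is point-cloud registration. The sketch below uses Open3D's iterative closest point (ICP) routine as a stand-in; it is not Epson's WorkSense software, and the file names and correspondence distance are assumptions.

```python
# Hedged sketch: estimate a part's pose by registering a sensor scan
# against points sampled from its CAD model, using ICP in Open3D.
import numpy as np
import open3d as o3d

model = o3d.io.read_point_cloud("part_cad_sampled.ply")  # hypothetical file
scan = o3d.io.read_point_cloud("sensor_scan.ply")        # hypothetical file

est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
result = o3d.pipelines.registration.registration_icp(
    scan, model,
    max_correspondence_distance=2.0,  # mm; depends on sensor noise
    init=np.eye(4),                   # assumes a rough initial alignment
    estimation_method=est,
)
# result.transformation maps scan -> model; its inverse gives the part's
# pose in the camera frame, which the robot can convert to a grasp.
pose = np.linalg.inv(result.transformation)
print("estimated part pose:\n", pose)
```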

Andrew Zosel, senior VP of engineering & commercial operations at Omron Microscan Systems, views machine vision in two more basic ways: First, it can perform tasks that a human simply cannot do. Second, it can handle mundane and tedious tasks that humans cannot sustain for long. “Machine vision today is used to inspect extremely small and fine features, such as in semiconductors or flat panel displays, that a human will never be able to do,” he said.

Another example is inspecting products for quality on production lines that might run at 1,200 units per minute, as in a bottling plant. One need not think hard to find tedious tasks humans cannot do for long, like inspecting painted parts for blemishes.
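To put that rate in perspective, 1,200 units per minute is 20 parts per second, a total budget of roughly 50 milliseconds per part for image capture, processing and the accept/reject decision combined.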

Zosel agrees that advances such as 3D sensors, color and multispectral imaging, as well as high-speed cameras, have opened new applications.

Processing and I/O have also kept pace.

What has not kept pace is the availability of machine vision experts who can help manufacturers put these pieces of the puzzle together. “So, the big drive for us is to make products more integrated into environments, to work with other pieces of equipment,” he said. “To make it more interactive and easy to use.”

A good example of this is embedding machine vision technology directly into machines that may have used either a simpler sensor or none at all. Consider Omron’s new HAWK MV-4000, a CMOS camera line with sensor sizes ranging from ¼” to 1”, built-in dual-core processors, 2 GB of on-board RAM and 32 GB of storage. It also provides a Gigabit Ethernet connection for integration into a system, among other features.

Embedded machine vision has made it possible to characterize the geometry of individual parts. Coupled with the ability to track each part using barcode readers, smart software can now tolerance-match individual parts against their GD&T and choose those that will create the best assemblies.

“Think of matching a piston with a cylinder, just as an example, or the left-hand and right-hand side of an electronics case,” Zosel said. “By doing that, you can go well beyond what the tolerances of the machine’s capability would be, creating extremely precise assemblies by leveraging the power and capability of machine vision.”
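A minimal sketch of the idea, with illustrative numbers rather than any real process data: measure both sides of the fit, then pair parts so that each assembly's clearance lands as close to nominal as possible.

```python
# Hedged sketch of selective assembly ("tolerance matching"): pair
# measured pistons with measured bores so every clearance stays near
# the nominal design value. All numbers are made up for illustration.
NOMINAL_CLEARANCE = 0.030  # mm, hypothetical design clearance

pistons = [49.982, 49.990, 49.975, 49.988]  # measured diameters, mm
bores   = [50.012, 50.021, 50.003, 50.016]  # measured diameters, mm

# Sorting both lists pairs the smallest piston with the smallest bore,
# keeping each clearance near nominal even as parts drift within their
# individual tolerance bands.
for piston, bore in zip(sorted(pistons), sorted(bores)):
    clearance = bore - piston
    print(f"piston {piston:.3f} + bore {bore:.3f} -> clearance "
          f"{clearance:.3f} mm (error {clearance - NOMINAL_CLEARANCE:+.3f})")
```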

At Omron, development of plug-and-play devices in machine vision will continue to expand. “With plug-and-play becoming prevalent in other technologies needed for automation pieces, you do not have to be an expert in any one area,” he said. “This allows you to think only of your processes. Say you’re working for an automotive company, you can be an expert in automotive bodies and not in automation because [we are making it] easy to use the automation today, relatively speaking.”

The role of AI

A key trend that is making it possible to have more plug-and-play and smart devices is the growing use of artificial intelligence (AI) techniques.

FANUC’s Dechow points out that AI is not particularly useful for robotic guidance in the traditional sense: determining how a robot can move from one point to the next.

“AI is not good at discrete analysis, which is the heart of robotic guidance,” he said. “On the other hand, ‘AI learning’ is exceptionally good at categorization and differentiation.”

This is especially powerful when used with the imagery captured from machine vision.

“Machine vision just within the last two to four years has really embraced AI techniques,” he said. He attributes its growing use to the vastly improved power of today’s computers.

Is this one of those inflection points that often occur in technology development?

“AI has a place potentially to play in this area of making things easy to use in a plug-and-play environment,” Omron’s Zosel said. “AI can help you with an assisted set-up. It would do this by encapsulating the 20 to 30 years of application engineering experience that may guide that user to help set up a product.”

AI is important to Cognex, Roszkowski said. “We sincerely believe AI is our future and the future of machine vision. We are investing very heavily.”

Cognex’s latest release of its image processing software, VisionPro ViDi, uses deep learning for defect detection, texture and material classification, assembly verification and optical character recognition (OCR) on distorted parts.
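To give a flavor of what deep learning for defect detection means in code, here is a generic PyTorch sketch, not Cognex's ViDi: a small convolutional network that categorizes inspection images as good or defective once trained on labeled examples.

```python
# Minimal sketch of an image classifier for pass/fail inspection.
# Architecture and sizes are arbitrary choices for illustration.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # good vs. defect
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One grayscale 128x128 inspection image -> two class scores.
scores = DefectClassifier()(torch.randn(1, 1, 128, 128))
print(scores.shape)  # torch.Size([1, 2])
```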
