
Vision Systems, Warts and All


Vision systems do many things well, but be sure you understand their limitations

By Thomas R. Kurfess
Professor and BMW Chair of Manufacturing
Department of Manufacturing Engineering
Clemson University
Clemson, SC


In the past, vision systems were used primarily for identifying surface defects, for part verification, and for determining gross part orientation. Many vision systems have been used to identify surface defects such as blemishes on machined or highly finished surfaces. And they are also used to make sure that a part is clean. For example, vision systems are used to check precision surfaces on bearing races before assembly. Another traditional use of vision systems is to identify parts and their orientations for flexible assembly systems. Vision systems are used to reduce the need for specialized fixtures that hold the part in a precise location. Of course, the system is also used in this situation to ensure that the right part is in place, so some gross metrology occurs on these systems.

More recently, vision systems are being used for dimensional metrology. Here, three basic approaches are used: internal camera calibration, external camera calibration, and photogrammetry. With the exception of photogrammetry, most vision systems are used for 2-D metrology.

Internal camera calibration involves using a calibrated camera in which the size of each pixel is known. In this instance, each pixel can be considered a division on a scale used to measure an object. Nominally, the camera is calibrated by taking a picture of an object of known size, and then counting pixels. If you take a picture of a 100-mm-diam circle and it spans 10,000 pixels, each pixel would define 10 µm.
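The pixel-counting arithmetic above can be sketched in a few lines. This is only an illustration of the idea, assuming square pixels and a fixed camera-to-target distance; the function names are made up for this example, not a real API.

```python
# Sketch of internal camera calibration: derive the pixel size from a
# reference circle of known diameter, then measure an unknown feature.
# The 100 mm circle spanning 10,000 pixels follows the example in the
# text; function names are illustrative only.

def microns_per_pixel(known_size_mm: float, span_pixels: int) -> float:
    """Pixel scale in micrometres, valid only at the calibration
    distance (the key caveat of internal calibration)."""
    return known_size_mm * 1000.0 / span_pixels

def measure_mm(span_pixels: int, scale_um_per_px: float) -> float:
    """Convert a pixel span into millimetres using the calibrated scale."""
    return span_pixels * scale_um_per_px / 1000.0

scale = microns_per_pixel(100.0, 10_000)
print(scale)                      # 10.0 µm per pixel
print(measure_mm(2_500, scale))   # a 2,500-pixel feature -> 25.0 mm
```

Note that the scale is only valid while the distance to target is unchanged, which is exactly the limitation discussed next.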

Because many issues affect the calibration, including distance to target, this is a difficult way to make measurements with vision systems. If you are using a camera and taking a picture of an object, as you approach, it gets bigger in the viewfinder. So how does one calibrate to an object size? You need to know the distance to the target object if you use internal calibration. 

In many instances, calibration must be done when the system is used, because many variables affect the size of the image that falls on the actual CCD array. For external calibration, one will often put objects of known size into a picture to provide image calibration points. These objects could be precision spheres, or points if you are doing a very precise measurement. They could also be less precise objects, such as scales or meter sticks.
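The external-calibration idea reduces to a simple ratio: the reference object's known size over its measured pixel span gives the scale for that image. A minimal sketch, with illustrative numbers and made-up names, assuming the part lies in the same plane as the reference:

```python
# Sketch of external calibration: a reference object of known size is
# placed in the scene, and its pixel span sets the scale for everything
# else at the same distance in the same image. Names and numbers are
# illustrative.

def scale_from_reference(ref_size_mm: float, ref_span_px: int) -> float:
    """Millimetres per pixel for this image, from the reference object."""
    return ref_size_mm / ref_span_px

# A 25.4 mm precision sphere spans 508 pixels in the image:
mm_per_px = scale_from_reference(25.4, 508)   # ~0.05 mm/px
part_width_mm = 1_200 * mm_per_px             # part spans 1,200 px
print(part_width_mm)                          # ~60.0 mm
```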

Photogrammetry measures 2-D or 3-D geometries by using multiple images of the same object. Typically, fiducial marks are placed on the target. These marks are used to relate multiple images to one another to reconstruct overall object geometry. A pair of cameras can also be used to generate a stereometric data set. In this instance, precision cameras are rigidly mounted on a bar with a known distance between the two cameras. Pairs of stereo pictures are taken, generating 3-D information about the object. In general, this approach can yield very good results, but it's important to remember that a significant number of pictures must be taken and large data sets must be processed.
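For the stereometric case, the depth of a point seen by both cameras follows from the pinhole model: depth is proportional to focal length times baseline, divided by the disparity (the pixel shift of the feature between the two images). A minimal sketch with illustrative values; real photogrammetry packages handle lens distortion and bundle adjustment on top of this.

```python
# Sketch of stereo depth recovery with two rigidly mounted cameras at a
# known baseline. Pinhole model: Z = f * B / d, with the focal length f
# expressed in pixels and the disparity d in pixels. Values illustrative.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Distance to the point along the optical axis, in mm."""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_mm / disparity_px

# f = 2000 px, cameras 200 mm apart, feature shifted 50 px between views:
print(depth_from_disparity(2000, 200, 50))   # 8000.0 mm, i.e. 8 m
```

The formula also shows why a longer baseline or longer lens improves depth resolution: both increase the disparity produced by a given depth difference.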

Good picture quality is necessary for all approaches that use vision systems for metrology. Edges must be clear and crisp, not fuzzy. This process can be likened to measuring a part with a gage or micrometer, where part stiffness becomes a factor. If you measure a hardened steel pin, then you're all set. If you measure a balloon, it deflects as you touch it, and its edge is not well defined. Of course a vision system is great for measuring compliant parts, as it's a noncontacting system.

In addition to sharp focus, distortion in your image must be minimized, so the system must provide well-refined pixels and low lens distortion. Nominally, vision systems that are used for metrology have superior CCD arrays and lenses. It's entirely possible, of course, to calibrate some array and lens nonconformities out of the final measurement via software.

If the camera tends to elongate images so that a circle looks like an ellipse, you can take a picture of a precision circle, which will result in an image that shows an ellipse. Via a transformation (these are very simple transformations), that image can be mathematically corrected to transform the ellipse back to a circle. The same transformation can be used on subsequent images to correct errors produced by lens nonconformities.
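The ellipse-to-circle correction above is, in its simplest form, a per-axis rescaling. Here is a minimal sketch of that linear case with illustrative numbers; real lens models also include radial and tangential terms, which this deliberately ignores.

```python
# Sketch of the software correction described above: if a calibration
# circle images as an axis-aligned ellipse, the per-axis stretch can be
# inverted and the same transformation applied to later images.

def correction_factors(true_diam: float, measured_x: float,
                       measured_y: float) -> tuple:
    """Per-axis scale factors that map the ellipse back to a circle."""
    return true_diam / measured_x, true_diam / measured_y

def correct_point(x: float, y: float, sx: float, sy: float) -> tuple:
    """Apply the stored correction to a point in a subsequent image."""
    return x * sx, y * sy

# A calibration circle of diameter 100 images as a 100 x 110 ellipse:
sx, sy = correction_factors(100.0, 100.0, 110.0)
print(correct_point(50.0, 55.0, sx, sy))   # ~ (50.0, 50.0)
```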

In principle this approach works well. But there are all sorts of issues that can arise if the image has hardware-induced errors. Users are always better off employing the best-available hardware and minimizing the calibration done in software. Strive to come as close to technical requirements as economically possible by obtaining the best possible hardware, and then correct the system using software. Software correction is much cheaper than hardware upgrades.

If you want to use a cheap camera for metrology, you're asking for trouble. On the other hand, if you want to use it for detecting flaws or extra/missing markings on a product, an inexpensive camera isn't a bad option.

 Where very high-precision camera systems are used, calibration is done at the factory, during installation, or using calibration artifacts. Systems such as the Zeiss F25 (Carl Zeiss Inc. North America; Minneapolis) use optical systems to make 2-D measurements on quite small components. The lens on the F25 is a fixed-type unit, so it's not changed to vary magnification.

Another firm that makes measurements using microscopes with integrated vision systems is View Engineering Inc. (Simi Valley, CA). Basically, they take calibrated pictures with their microscope, and then make measurements. In many of these systems, however, the objective power (lens magnification) can change. Thus, calibration is also related to lens power: if you change lenses, you must recalibrate. One advantage of the microscope systems is that the user has a feel for the distance to target, because you must focus in on the object, and this permits a better job of calibration.

One really interesting application of vision systems is white light interferometry. These systems are produced by companies like Zygo Corp. (Middlefield, CT) and Veeco Corp. (Woodbury, NY). To use this technique, you place interference patterns on the object measured, and use the vision system to process the interference fringes/patterns. So the vision system measures the patterns that give high Z (normal) resolution on the part. Typical resolutions for interferometric measurement normal to the surface being measured are 1 nm or better. The X-Y (lateral) resolution is limited by the density of the pixels on the CCD array as well as any stitching algorithm, and is typically in the 1 µm range.

To use any vision system, you must locate the object you want to measure or inspect in the camera's field of view. If you cannot fit the object in the field of view, there are three basic options. The first is to back up, and the second is to use a lens that has a wider field of view. Although wide angle lenses may generate distortion around the edges of the image, software can correct this distortion.

In essence, both of these approaches make the object appear smaller to the CCD array. You lose image resolution by moving back or using a wider-angle lens. This is the tradeoff. It's the same for metrology systems that employ a microscope. If you want to look at very small details on an object, use a high-power objective with substantial magnification. Otherwise, use a lower-power objective that provides less resolution.
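The tradeoff can be put in numbers: for a fixed sensor, widening the field of view spreads each pixel over more of the part. A minimal sketch with illustrative figures:

```python
# Sketch of the field-of-view / resolution tradeoff: one pixel covers
# field-of-view divided by pixel count, so backing up or using a wider
# lens coarsens the measurement. Numbers are illustrative.

def lateral_resolution_um(field_of_view_mm: float,
                          pixels_across: int) -> float:
    """Smallest resolvable step (one pixel) across the field, in µm."""
    return field_of_view_mm * 1000.0 / pixels_across

sensor_px = 2048
print(lateral_resolution_um(20.0, sensor_px))    # ~9.8 µm at a 20 mm field
print(lateral_resolution_um(100.0, sensor_px))   # ~48.8 µm at a 100 mm field
```

Fitting a five-times-larger field onto the same array costs a factor of five in lateral resolution, which is exactly why stitching is attractive.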

The alternative to changing distance (or lens power) is to take several photographs and stitch them together. This can be done in two ways: the first and most popular is to move the target to known positions, and capture a series of images of the object. These images can then be stitched together. Another approach is to stitch the images using features captured on the object in multiple images (much like creating a panoramic view with your digital camera).

The second approach is not as accurate as the first, but microscope systems make use of it regularly. They servo the part beneath the lens, stop, and then snap a picture. The microscope's servo stage is well calibrated, or may even have precision feedback on its encoders (linear or rotary) that allows the system to know where each of the pictures was taken. In the end, the computer overlays the pictures into one large data set.
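The stage-based stitching described above amounts to placing each tile in a common part coordinate frame using the recorded stage position. A minimal sketch, assuming the stage offsets are known exactly from encoder feedback; names and numbers are illustrative.

```python
# Sketch of stage-based stitching: map a pixel in tile-local coordinates
# into the part frame by adding the stage offset recorded when that tile
# was captured. Overlapping tiles then agree on shared features.

def tile_to_part(px: int, py: int, stage_x_mm: float, stage_y_mm: float,
                 mm_per_px: float) -> tuple:
    """Tile-local pixel -> part-frame coordinates in mm."""
    return stage_x_mm + px * mm_per_px, stage_y_mm + py * mm_per_px

# Two overlapping tiles taken 8 mm apart; the same edge appears in both:
mm_per_px = 0.01
a = tile_to_part(900, 500, 0.0, 0.0, mm_per_px)   # edge seen in tile 1
b = tile_to_part(100, 500, 8.0, 0.0, mm_per_px)   # same edge in tile 2
print(a, b)   # both map to ~ (9.0, 5.0) in the part frame
```

Any error in the recorded stage position lands directly in the stitched coordinates, which leads to the accuracy penalty discussed next.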

Stitching always reduces measurement accuracy. The system is already limited by the accuracy of each individual camera image (the accuracy of one single shot); stitching adds the errors caused by stage motion and stitching algorithms. Often system capabilities are quoted in a single-shot mode as well as in a stitching mode. When you set out to determine a machine's capabilities, be aware of system-capability specifications, and understand which modes are being used.

On the plus side, vision systems are fast. They generate a significant amount of data and are easy to use, and many vision systems today can plug into a USB port. Typically they don't have the accuracy of a CMM, and are really intended for 2-D measurement. Also, because they are noncontacting, they may become confused by dirt on the object measured. Basically, they measure dirt as if it were part of the object. Contacting devices tend to push dirt out of the way.

Another problem with vision systems is that they may miss parts of the workpiece. This happens if too little or too much light is present on sections of the part. If too little light is present, a data hole can be generated, and no information is available. If there is too much light, the camera sensor can be saturated, again losing any information about the part at the point of saturation. If multiple pictures are taken and stitched together, or if photogrammetry is used and part of the object being measured is not captured in one of the pictures, there will be holes in the data.

Remember that atmospheric disturbances, such as cutting fluid mist, can also interfere with vision systems. Consequently, vision systems cannot be used in harsh environments. You might be able to use them in somewhat harsh environments such as machine tools, but their lenses must be covered, and you must open the cover only after all of the coolant has settled down. Also, camera lenses can fog up if you are in a high-humidity environment.

Finally, some CCD arrays detect wavelengths that may not be visible to the human eye. Odd results can occur if you are inspecting hot parts (e.g., from heat treat). Of course, this does point out the fact that you might use IR (infrared) cameras to inspect objects.

Vision systems can be an excellent source for SPC data. This is an application for which hard gages, manual systems, and manual optical-inspection devices are not well suited. If you can capture the digital data from vision systems and process it, life is very good. If you do metrology or measurement with a shadowgraph, and the image is digitized, the data can be used for SPC.

Remember that vision systems generate huge amounts of information. Think about a five megapixel camera. If each pixel is one byte, you get 5 MB of data per frame. Typically, processing vision data requires lots of computing horsepower and memory. Most vision-based metrology systems will have more than a GB of RAM, and may have special vision-processing hardware to implement vision algorithms.
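The back-of-envelope arithmetic in the paragraph above is worth writing out, since frame rate multiplies it quickly. Figures are illustrative.

```python
# Data-volume check: pixels times bytes per pixel per frame, times
# frame rate for sustained throughput. Matches the 5 MP -> 5 MB example
# in the text (1 byte per pixel, monochrome).

def frame_bytes(megapixels: float, bytes_per_pixel: int = 1) -> int:
    """Raw bytes in one frame."""
    return int(megapixels * 1_000_000 * bytes_per_pixel)

print(frame_bytes(5) / 1_000_000)   # 5.0 MB per frame
print(frame_bytes(5) * 30)          # 150,000,000 bytes/s at 30 frames/s
```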

There are a number of algorithms that can be used on vision-system data. The key issue, however, is edge detection. If you are trying to measure the diameter of a hole, you need to detect the edge (or circumference) of the hole, and then determine which two points on the circumference are farthest apart. To make a measurement, you must be able to detect an edge.
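The hole-diameter measurement above can be illustrated on a 1-D intensity profile: flag pixels where the brightness jump between neighbors exceeds a threshold, then take the largest separation between flagged edges. This is a deliberately crude sketch; production systems use subpixel edge operators and fit geometry to many edge points.

```python
# Sketch of edge detection for a diameter measurement on a 1-D scan
# line across a hole: a simple gradient threshold marks the edges, and
# the widest edge pair gives the diameter in pixels.

def edge_positions(profile: list, threshold: float) -> list:
    """Indices where the intensity jump between neighbouring pixels
    exceeds the threshold -- a crude 1-D edge detector."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) > threshold]

def diameter_px(profile: list, threshold: float) -> int:
    """Largest separation between detected edges, in pixels."""
    edges = edge_positions(profile, threshold)
    if len(edges) < 2:
        raise ValueError("need at least two edges to span a diameter")
    return max(edges) - min(edges)

# Bright background (250) with a dark hole (20) in the middle:
scan = [250, 250, 250, 20, 20, 20, 20, 20, 250, 250]
print(diameter_px(scan, 100))   # edges at indices 3 and 8 -> 5 px
```

Multiply the pixel diameter by the calibrated scale and you have the measurement, which is why edge detection and calibration together determine system accuracy.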

And if you are looking for markings (stampings on a surface) or blemishes, you will need to find surface nonconformities. These are typically identified by locating the edges of markings that should not be present, or not being able to locate edges that should be present. If you are looking for a scratch on a surface, you would look for the scratch, which would be interpreted as an edge. If you were searching for the alphanumeric mark reading 32 psi on a tire, you would find the mark by detecting edges.



This article was first published in the January 2007 edition of Manufacturing Engineering magazine.    
