
Taking automation to a new level with advances in 3D vision

By Ed Sinkora, Contributing Editor, SME Media
FANUC’s new 3D Vision Sensor uses a single shot to create a 3D point cloud of the field of view and then a variety of software tools to pick randomly oriented parts from a bin.

If there’s one thing you can say without reservation about manufacturing today, it’s that everybody wants more automation and flexibility. With advanced 3D vision and a multiaxis robot, companies can now automate to a degree their executives only dreamt of a few years ago. On top of that, companies can also reorient their automation to a greater degree than ever before possible.

3D vision tech firms are experiencing “continuous growth in the number and sophistication of 3D imaging components and related software,” said David Dechow, a staff engineer in FANUC America’s intelligent robotics/machine vision division.

Consider one of the most common automation tasks: bin picking.

A common approach would be to dump incoming parts into a bin and use a shaker unit to orient the parts as they are fed onto a conveyor line and picked up. “That usually works pretty well—until you need a new part, and now you have to spend $25,000–$50,000 to redesign or reimplement a new shaker unit,” said Keith Vozel, product manager for Yaskawa Motoman. “With our solution, you dump it in the same bin, point to a new CAD model and say, ‘Start processing this part’.”

Vozel was referring to Yaskawa Motoman’s new MotoSight 3D BinPick system, which uses “CAD matching” to identify and pick parts from a randomly oriented bin.

“Once you load a CAD model of your part into the system, you define various ways that the system can pick up the part,” he said. “As an example, if it’s a nut and you’re using a gripper, you can tell the system the gripper can go into the middle of the nut and expand to pick it up, or pick it up with one in the middle and one on the outside. The system will then automatically recognize the part and determine the best way to pick it based on its orientation in the bin, while also preventing collisions between the end-of-arm tooling and the bin or other parts. It will prioritize all these factors to get the best pick for that part without any programming.

“All the programmer is responsible for is what to do with the part once it’s been picked, such as moving it over to a machine and placing it in a chuck. Typical robot programming.”
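To make the idea concrete, here is a minimal sketch of how per-part grip strategies might be represented and selected in software. The class and function names are illustrative, not Yaskawa’s API, and the collision check is assumed to come from the vision system.

    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical illustration of per-part grip strategies; not Yaskawa's API.

    @dataclass
    class GripStrategy:
        name: str               # e.g. "expand inside bore", "pinch across outside"
        approach_axis: str      # part-frame axis the gripper approaches along
        priority: int           # lower number = preferred strategy

    @dataclass
    class PartPose:
        exposed_axes: List[str]  # part-frame axes reachable in this orientation
        collision_free: bool     # tooling clears the bin walls and other parts

    def choose_grip(pose: PartPose, strategies: List[GripStrategy]) -> Optional[GripStrategy]:
        """Return the highest-priority strategy whose approach axis is reachable,
        skipping any pose the collision check has already ruled out."""
        if not pose.collision_free:
            return None
        usable = [s for s in strategies if s.approach_axis in pose.exposed_axes]
        return min(usable, key=lambda s: s.priority) if usable else None

    # A nut that can be gripped from inside the bore or across the outside.
    nut = [GripStrategy("expand inside bore", "z", 0),
           GripStrategy("pinch across outside", "x", 1)]
    pose = PartPose(exposed_axes=["x"], collision_free=True)
    print(choose_grip(pose, nut))   # falls back to the outside grip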

Depending on part complexity, you can import about 200 parts into the system, which is more than most companies would need for this kind of manufacturing application. And since adding parts is as easy as adding a CAD file and doing a little programming, MotoSight 3D BinPick may well offer a lower cost of ownership than an inflexible system like a vibratory bowl.

Out on the shop floor, a high-performance Canon camera with an integrated lighting unit creates a 3D point cloud of a bin’s contents in a single shot, feeding that info to the MotoSight PC.

Yaskawa Motoman says many other solutions use two-step recognition technology, in which a 3D area sensor first finds the part in the bin and then, once the part is picked, a second 2D recognition step detects its orientation for processing.

“Because they’re not using a CAD model of the part,” said Vozel, “they don’t know exactly what they’re seeing and how they’re picking it. Having the CAD model to recognize what’s in the bin allows this to be done all at once, saving cycle time.”

The other claim here is that the use of CAD data enables the system to place parts with great precision, which would be necessary for many follow-on manufacturing tasks. But the superiority of this approach in terms of either speed or accuracy is debatable, as we’ll see.

Another one-shot method

FANUC’s new entry to the field is 3D Vision Sensor, which Dechow described as “a single-shot 3D stereo camera imaging system using pseudo random points, or point texture projection, to create a 3D point cloud of the field of view. We then use a variety of software tools in our bin picking and random part picking toolbox to extract information out of the 3D point cloud—anywhere from pattern matching, to blob analysis, to surface analysis, and a variety of other tools to see the variety of 3D parts that we might have to pick.”
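As a rough illustration of the kind of 3D blob analysis Dechow mentions, the sketch below rasterizes a point cloud into a height map and labels connected regions that rise above the bin floor. It is a generic example, not FANUC’s toolbox; the cell size and floor estimate are arbitrary assumptions.

    import numpy as np
    from scipy import ndimage

    # Generic 3D blob analysis on a single-shot point cloud; not FANUC's toolbox.

    def find_blobs(points: np.ndarray, cell: float = 0.005, floor_offset: float = 0.01):
        """Rasterize an N x 3 point cloud (metres) into a height map, keep cells
        that stand above the estimated bin floor, and label connected blobs."""
        xy, z = points[:, :2], points[:, 2]
        idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
        height = np.full(idx.max(axis=0) + 1, -np.inf)
        np.maximum.at(height, (idx[:, 0], idx[:, 1]), z)   # per-cell max height
        floor = np.median(z)                                # crude floor estimate
        labels, n = ndimage.label(height > floor + floor_offset)
        return labels, n

    # Synthetic cloud: a flat bin floor plus one raised part.
    rng = np.random.default_rng(0)
    floor_pts = np.c_[rng.uniform(0, 0.3, (5000, 2)), rng.normal(0, 0.001, 5000)]
    part_pts = np.c_[rng.uniform(0.10, 0.15, (500, 2)), rng.normal(0.03, 0.001, 500)]
    _, n = find_blobs(np.vstack([floor_pts, part_pts]))
    print(n, "candidate blob(s) found")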

FANUC already had a multi-image structured coded light system (3D Area Sensor), but the company introduced 3D Vision Sensor for applications that require higher speed than multi-shot 3D image sensors can deliver. That includes processing small bins or totes in manufacturing, applications like warehouse/distribution and retail order fulfillment, and situations where the part may be moving, such as part tracking.

And while Dechow sees CAD matching as a valuable technology for situations where the potential number of incoming parts is constrained, it’s often not the right solution.

“In most situations you don’t need to identify what’s in the bin, you just need to unload it. FANUC uses multi-surface 3D pattern matching using a set of generic functions much like you do in 2D. Perhaps analysis of the neighboring peaks of the object, analysis of the 3D blob of the object over a constrained surface area, analysis of the object as a geometric primitive, and many others. These are the kinds of techniques you need to get truly non-homogeneous random parts out of a bin. CAD matching, while reasonable for constrained part sets, just doesn’t work over broader applications that have pretty much unconstrained part sets.”

CAD matching isn’t inherently more accurate in picking and placing parts either, Dechow said.

“Most modern 3D vision systems can locate and pick the part with a very high degree of accuracy. But in a truly random bin you will hardly ever be able to pick a part up exactly the way you’d prefer to pick it up. You almost always have to grip it and move it to a place where you grip it again. Finding the part is the easy thing. Gripping the part is the hard thing. You need algorithms that enable the robot to pick the part where it’s pickable and then use that to get it out of the bin. CAD matching is not going to solve all the problems of re-gripping or identifying where the part is. It’s just plain not a differentiating factor.”
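The flow Dechow describes might look roughly like the following sketch: take whatever grasp is actually reachable, then set the part down and re-grip it in the preferred orientation. The names and data are hypothetical stand-ins, not any vendor’s API.

    # Hypothetical sketch of "pick where pickable, then re-grip"; illustrative only.

    def pick_from_bin(candidates):
        """Choose the most exposed collision-free grasp, not the ideal one."""
        usable = [c for c in candidates if c["collision_free"]]
        return max(usable, key=lambda c: c["exposure"], default=None)

    def regrip(grasp):
        """Place the part on a known fixture, then grip it the preferred way
        so it can be presented accurately to the next operation."""
        print(f"placed part on fixture using '{grasp['name']}' grasp")
        print("re-gripped in preferred orientation")

    candidates = [
        {"name": "ideal top grasp", "exposure": 0.2, "collision_free": False},
        {"name": "side grasp",      "exposure": 0.8, "collision_free": True},
    ]
    grasp = pick_from_bin(candidates)
    if grasp:
        regrip(grasp)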

Vozel disagreed: “3D CAD matching is what enables the MotoSight 3D BinPick to identify and specifically pick a part so that it can be accurately presented to the next step, without moving it and picking it again. Point cloud and image mapping, along with various software options, do the hard work in one easy step, without extra processing time.”

Vozel does agree that different applications require different solutions; Yaskawa Motoman’s broader MotoSight line includes 2D and other 3D products, each designed for a specific application.

AI and new degrees of speed

How quickly can a 3D imaging system process all this information? And is processing speed a limiting factor?

It depends.

Image capture and decision time for FANUC’s single-shot system vary widely depending on the object and the number of analysis tools executed, from roughly 300 milliseconds to roughly 1,400 milliseconds, Dechow said.

“If we do complex matching it’s up to two or three seconds. But with robots the processing speed usually isn’t a huge issue because the picture is taken while the robot has picked up a part and is going somewhere else.”
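A minimal sketch of why imaging time is often hidden: the next image can be captured and processed while the robot is away placing the part it just picked. The timings below are placeholders taken loosely from the figures above, and the functions only simulate work.

    import threading, time

    # Toy illustration of overlapping image capture with robot motion.

    def capture_and_process(result):
        time.sleep(0.4)                    # ~300-1,400 ms single-shot capture + analysis
        result["pick_pose"] = "next pick pose"

    def robot_place_part():
        time.sleep(1.0)                    # robot transports and places the previous part

    result = {}
    imaging = threading.Thread(target=capture_and_process, args=(result,))
    start = time.time()
    imaging.start()                        # start imaging as soon as the robot clears the bin
    robot_place_part()                     # motion and imaging run in parallel
    imaging.join()
    print(f"cycle took {time.time() - start:.1f} s; imaging added no extra time")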

As you’d expect, CAD matching takes longer.

“It’s four to seven seconds to take the picture, process the image to recognize the part, tell the robot what to do, and recycle the camera to be ready to come back and do another one,” Vozel said. But, he added, even if that’s slower than the current approach, the solution may still add value given its precision and repeatability.

Dechow also pointed out that it’s not always necessary to capture an image on every cycle.

“Depending on the size of the part and the size of the random pile, we try to get anywhere from five to 20 picks from every image. We go in and out as fast as the robot can move before we get the next image. But it’s highly dependent on the parts.”

For example, if a bin contains a few large parts and picking up one could disturb a neighboring part, the system would probably have to take a picture after every pick. But in a bin full of many small items on the surface, the system could probably pick from many areas without needing another image. It might “blacklist” an area until the next image after making a pick (in case nearby items were disturbed), but this would still leave multiple picks.

“You don’t usually get as many as you find but you try to get multiples, depending on the number at the surface,” he said.
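One way to sketch that logic: take several picks from a single image, but blacklist an exclusion zone around each pick in case neighboring items were disturbed. The radius and detections below are made up for illustration.

    import math

    # Illustrative "several picks per image" scheduling with exclusion zones.

    def picks_from_one_image(detections, exclusion_radius=0.05):
        """Return picks usable from one image; after each pick, blacklist
        nearby detections until the next image is taken."""
        picks, taken = [], []
        for det in sorted(detections, key=lambda d: -d["score"]):
            if all(math.dist(det["xy"], t) > exclusion_radius for t in taken):
                picks.append(det)
                taken.append(det["xy"])
        return picks

    detections = [
        {"xy": (0.10, 0.10), "score": 0.9},
        {"xy": (0.12, 0.11), "score": 0.8},   # too close to the first pick: deferred
        {"xy": (0.30, 0.25), "score": 0.7},
    ]
    print(len(picks_from_one_image(detections)), "picks from this image")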

But what if you need to recognize, pick, and place from thousands of diverse and randomly oriented objects at speed?

Hob Wubbena, VP for Universal Logic, said the answer is his company’s artificial intelligence (AI), named Neocortex, which “goes beyond what 3D vision alone can do. That means thousands or even hundreds of thousands of different items, typically picked at 600–1400 per hour.”

To do that, Universal’s 3D vision software, Spatial Vision, provides a 3D scan between every robot cycle. The system takes only 500–1200 milliseconds to capture and process an image, including “zeroing in on a range of interest, detecting edges, identifying parts, using AI to decide which part to pick—whether the one with the highest probability, the part closest to the robot, the one farthest away, etc.—and sending the pick point in six degrees of freedom to the robot.

“It provides the answer before the robot needs it. If the application requires even higher speeds, it can either take a single shot 3D scan and pick multiple items from that one image, or Neocortex can just pick items from the bin without recognizing the objects.”
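The processing chain Wubbena describes might be outlined like this: crop to a region of interest, identify parts, apply a pick-selection policy, and hand the robot a six-degree-of-freedom pick point. Every function body below is a toy stand-in, not Universal Logic’s software.

    from dataclasses import dataclass

    # Toy outline of a scan-to-pick pipeline; not Universal Logic's Spatial Vision.

    @dataclass
    class PickPoint:
        x: float; y: float; z: float       # position (m)
        rx: float; ry: float; rz: float    # orientation (rad)

    def crop_roi(cloud):                   # keep only points inside the bin volume
        return [p for p in cloud if 0.0 <= p[0] <= 0.3 and 0.0 <= p[1] <= 0.3]

    def identify_parts(points):            # stand-in: treat each point as a "part"
        return [{"pose": (*p, 0.0, 0.0, 0.0), "prob": p[2]} for p in points]

    def select_pick(parts, policy="highest_probability"):
        key = {"highest_probability": lambda p: p["prob"],
               "closest": lambda p: -p["pose"][0]}[policy]
        return max(parts, key=key)

    cloud = [(0.10, 0.10, 0.05), (0.20, 0.20, 0.08), (0.50, 0.50, 0.02)]  # toy scan
    pick = select_pick(identify_parts(crop_roi(cloud)))
    print(PickPoint(*pick["pose"]))        # sent to the robot before it needs it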

Universal’s standard implementation is a complete robotic workcell called Neocortex G2R (goods to robot), which includes 3D vision with AI for tremendous versatility.

For order fulfillment, if it does not recognize an object, it automatically switches to geometric analysis/blob detection and just picks the item.

If 100% accuracy of a specific SKU (stock-keeping unit) pick is required (such as in pharmaceutical order fulfillment), a six-sided barcode scanning option has been tightly integrated.

In random bin picking, it can pick thousands of items on-the-fly through pattern, surface, and shape algorithms without full object recognition.

If high-quality robotic inspection or precise machine tending is required, Neocortex can use CAD matching with high-quality sensors, which typically takes 700 milliseconds to 2 seconds, resulting in accuracies to 70 microns.

Universal’s software is a modular platform that is agnostic with regard to both the sensor (which can be laser, LIDAR, structured light, time of flight, stereoptic vision, etc.) and the robot, though Yaskawa Motoman is a key partner in Universal’s “plug and play” G2R robot cell.

Universal invested heavily in high-end GPU processing with many parallel threads running to ensure that their software is always faster than the robot physics, Wubbena said. Fast enough to take a 3D image between every robot cycle and fast enough to pick from heterogeneous bins.

One of Universal’s customers currently picks from 90,000 unique items over the course of a year.

Universal hit the limit of 3D vision’s ability to distinguish between objects roughly five years ago, even though algorithms continue to improve, Wubbena said. He cited the example of tightly packed boxes of the same height: 3D vision systems often can’t tell the difference between the gaps between boxes and the gaps between the flaps of a single box.

“That’s something Neocortex AI can solve fairly well when integrated with 3D vision, which we started doing four years ago. Neocortex also extends perception for complex items, such as plants or intricate items. Our production systems, running in the field, are 99.5–99.98% reliable, defined as the percentage of operation without human intervention. That means you’d need human interaction for only one to thirteen minutes over 24 hours of operation.”

Another factor is the ability of Spatial Vision’s algorithms to determine the dimensions of physical objects within the field of view, which is accurate to within 1/20 of the sensor’s pixel. “Since our software works with any sensor, if the application requires greater 3D accuracy, just add or choose better fixed sensors, sensors on the robot arm, or both.”

Machine learning also takes over the vast majority of the programming required to pick diverse objects, with clear benefits.

“For instance, our Neocortex G2R system for order fulfillment, at initial installation and start up, has most of the basics to handle common consumer goods,” Wubbena said. “Every now and then if you have something strange or wildly different, you can switch from machine learning to human training mode. It takes a minute or two to train the AI system with a little human guidance and then you go right back to full automation.”

Wubbena pointed out that with systems requiring even as little as five minutes to configure the vision system to recognize each item, you’d need about 8,300 hours for 100,000 items, or roughly four engineer-years.

If it all sounds a little like science fiction, it’s worth remembering that Universal Logic is a NASA spin-off and that the technology was first applied in Robonaut, the only humanoid robot in space, operating aboard the International Space Station.

Let the bot correct you

For Klas Bengtsson, global product manager at ABB Robotics, 3D vision systems can often be easier to use than 2D systems, and the ever-increasing capacity of computers has made them easier to implement.

ABB recently developed a new quality inspection system with 3D vision.

“The robotic vision inspection system captures changes in part dimensions, providing not just a ‘yes’ or ‘no’ determination on the part, but information on how to correct certain features,” Bengtsson said.

ABB’s system uses a 3D white-light scanning sensor mounted to the arm of an ABB robot, relying on the agility of the robot to orient the sensor to access most areas of both simple and complex parts from the optimum angle. The system rapidly creates a point cloud of the object and compares the surface to the nominal CAD data, detecting even tiny deviations, enabling early detection of any production problems.

To give some quantitative sense of this capability, Bengtsson pointed out that a single capture has five million pixels and the “inspection system is typically accurate down to 30 microns and below.”
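A minimal sketch of the comparison step, using synthetic data: compute each measured point’s distance to the nearest nominal (CAD-derived) point and compare it against the inspection tolerance. A real system compares against the CAD surface itself; the brute-force nearest-point search below is only for illustration.

    import numpy as np

    # Toy point-cloud-to-nominal comparison; data and tolerance are illustrative.

    def deviations(measured: np.ndarray, nominal: np.ndarray) -> np.ndarray:
        """Distance from each measured point to its nearest nominal point (brute force)."""
        d = np.linalg.norm(measured[:, None, :] - nominal[None, :, :], axis=2)
        return d.min(axis=1)

    # Nominal surface sampled on a grid; "measured" points carry a little noise.
    grid = np.linspace(0, 0.1, 20)
    nominal = np.array([[x, y, 0.0] for x in grid for y in grid])
    measured = nominal + np.random.default_rng(1).normal(0, 10e-6, nominal.shape)
    dev = deviations(measured, nominal)
    print(f"max deviation: {dev.max() * 1e6:.0f} microns")  # compare to tolerance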

Yet at the same time, the speed, accuracy, and repeatability of the measuring system are independent of the robot’s accuracy and repeatability.

While this could theoretically be done while picking the part, he said, “given the amount of info required we typically do the dimensional check separately and often with more than one image. The real value is that you get instant feedback on the parts so you know if there is something you should adjust. Early detection means early rectification, preventing defective components from reaching the public domain. And you can also use the system to handle more mundane inspection tasks, like making sure parts are functionally fit and that various components will all actually fit together.”

How much?

How much do these sophisticated 3D vision systems cost?

Yaskawa Motoman’s new 3D vision subsystem lists at about $45,000, including everything that makes the subsystem work: the camera, the software that does the recognition, the software that interfaces to the robot controller, the high-performance graphics-oriented PC that all the software runs on, the cabling, the power supply, the monitor, the mouse…the whole system packaged in a rack-mounted unit.

The entire skid-mounted Neocortex G2R robot cell (including installation in North America and startup) has a 12–18 month payback at “half the cost of labor over a five-year period—$7 per hour, whether lease or purchase,” Wubbena said. “There are Neocortex systems [using all major robot brands] that have been operating in retail trade, wholesale trade, manufacturing, and agriculture for up to four years. Think of it as a virtual employee that can be re-deployed as processes and needs change.”
