As artificial intelligence (AI) is increasingly seen as a smart-manufacturing answer to a host of shop-floor challenges, software makers are ramping up development and deployment efforts.
According to IBM, about 35% of businesses globally are using AI, and another 42% are exploring the technology. Generative AI—which uses powerful foundation models trained on large amounts of unlabeled data—can be adapted to new use cases, bringing the flexibility and scalability to accelerate AI adoption significantly. While AI has received much attention in recent years, there is confusion around how machine learning (ML), in its various forms, fits into the AI conversation. To cut through the complexity, experts say it’s important to understand the differences, the use cases and the types of problems that need solving.
Machine learning is a subset of artificial intelligence that allows for optimization. When set up correctly, ML helps people make more accurate predictions across business processes and operations.
Classic or “non-deep” machine learning depends on human intervention to allow a computer system to identify patterns, learn, perform specific tasks and provide accurate results. Here, humans determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.
While the subset of AI called adaptive or “deep” machine learning can leverage labeled datasets to inform its algorithms through supervised learning, it doesn’t require them. It can ingest unstructured data in its raw form, such as text or images, and automatically determine the set of features that distinguishes specific objects, machines and so on.
A third category of ML is reinforcement learning, where a computer learns by interacting with its surroundings and getting feedback—rewards or penalties—for its actions.
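The reward-and-penalty loop described above can be sketched with a classic toy problem. The following is a minimal, hypothetical illustration in plain Python—an epsilon-greedy "bandit" agent choosing among machines with made-up success rates—not any vendor's actual implementation:

```python
import random

def run_bandit(true_rates, epsilon=0.1, steps=2000, seed=0):
    """Epsilon-greedy agent: try options, track reward estimates, favor the best."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rates)
    counts = [0] * len(true_rates)
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known option.
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))
        else:
            arm = max(range(len(true_rates)), key=lambda i: estimates[i])
        # Feedback from the environment: a reward (1) or a penalty (0).
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates, counts

# Hypothetical success rates for three options; the agent learns to prefer the last.
estimates, counts = run_bandit([0.2, 0.5, 0.8])
```

After enough interactions, the agent's reward estimates converge toward the true rates and it spends most of its pulls on the best option—learning purely from feedback, with no labeled examples.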
“Machine learning is properly suited to support repetitive tasks or problems where the feature dimension is known and the defined set of attributes is given,” explains David Vallejo, vice president and global head of digital supply chain at SAP. “For example, I have a disruption in manufacturing—the machine is down, and I have to reroute the job to a different machine. Then machine learning can help by saying, ‘Based on the attributes of this job, I can reroute the job to a different machine—call it ‘find first slot.’”
The type and amount of available data are driving decision-making, adds Nilesh Raghuvanshi, senior data scientist at Siemens Digital Industries Software.
Raghuvanshi illustrates the choice with apples and oranges. To use classic ML (rather than adaptive ML), problem-solvers need to organize structured data in rows and columns—for example, rows and columns listing specific features (such as size, weight, texture and color) of apples and oranges that feed into the ML algorithms. He also notes that a domain expert is needed to define which features to look for.
“If I have a problem, and the input data which I’m going to use to tackle that problem is structured data—by that, I mean data nicely organized into rows and columns of different features—then almost always my choice would be to go for classic machine learning,” Raghuvanshi says, noting that such algorithms have proven to excel at handling this type of data.
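A minimal sketch of the structured-data approach Raghuvanshi describes, in plain Python with hypothetical feature values—each row is a fruit described by human-chosen columns, classified by a simple nearest-neighbor rule:

```python
# Each row: (weight_g, diameter_cm, redness 0-1) — hypothetical, human-chosen features.
train = [
    ((160, 7.5, 0.90), "apple"),
    ((150, 7.0, 0.80), "apple"),
    ((170, 7.8, 0.85), "apple"),
    ((140, 6.8, 0.20), "orange"),
    ((135, 6.5, 0.15), "orange"),
    ((145, 7.0, 0.10), "orange"),
]

def classify(sample, k=3):
    """k-nearest-neighbors on hand-defined features — the 'hierarchy of features'
    a domain expert supplies in classic ML."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda row: dist(row[0], sample))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((155, 7.2, 0.88)))  # → "apple"
```

The point is the division of labor: a person decided that weight, diameter and redness are the informative columns; the algorithm only compares numbers.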
In contrast to classic ML, deep learning can tackle the problem of ambiguity with raw unstructured data, such as images of apples and oranges, Raghuvanshi says. There is no need for a domain expert when using deep learning.
“If I’m dealing with unstructured data such as text, natural language, images, audio, video—highly complex data that encapsulates a lot of complex, non-linear relationships—the default choice, in my opinion, will always be a deep-learning algorithm,” he says. “In almost all cases, I can just fill in the raw image data, and I can get the desired output with very good performance.”
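To make the contrast concrete, here is a toy sketch—in plain Python, with no real deep-learning framework—of a tiny neural network learning directly from raw "pixel" data. The 2×2 images and the task (vertical vs. horizontal bars) are invented for illustration; the task is deliberately not linearly separable, so the hidden layer must discover its own pixel combinations rather than rely on human-defined features:

```python
import math, random

# Toy "images": 2x2 pixel grids, flattened row-major.
data = [
    ([1, 0, 1, 0], 1), ([0, 1, 0, 1], 1),   # vertical bars  -> class 1
    ([1, 1, 0, 0], 0), ([0, 0, 1, 1], 0),   # horizontal bars -> class 0
]

def train(seed, hidden=4, epochs=3000, lr=0.5):
    """One-hidden-layer network trained by stochastic gradient descent."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w1[j][i] * x[i] for i in range(4)) + b1[j])
             for j in range(hidden)]
        z = sum(w2[j] * h[j] for j in range(hidden)) + b2
        return h, 1 / (1 + math.exp(-z))

    for _ in range(epochs):
        for x, y in data:
            h, p = forward(x)
            d_out = p - y                              # cross-entropy gradient
            for j in range(hidden):
                d_h = d_out * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * d_out * h[j]
                b1[j] -= lr * d_h
                for i in range(4):
                    w1[j][i] -= lr * d_h * x[i]
            b2 -= lr * d_out
    return forward

# Random restarts: keep the first run that fits the training set.
for seed in range(5):
    forward = train(seed)
    preds = [round(forward(x)[1]) for x, _ in data]
    if preds == [y for _, y in data]:
        break
```

No one told the network which pixels matter; the hidden layer's weights end up encoding the distinguishing patterns—the automatic feature discovery that makes deep learning the default for raw, unstructured data.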
Sydney-based Advanced Navigation Pty., a maker of AI-powered navigation solutions, uses both approaches on its own production line. According to CEO and Co-Founder Xavier Orr, the manufacturing facility uses classic ML for automated visual inspections of printed circuit boards; at the end of the line, the factory employs optical inspections based on advanced neural-network AI.
“The earlier stage inspection is less adaptive, more certain,” Orr says. “The later stage is more adaptive with more variability. The neural network can give that flexibility.”
Stephen Hooper, VP of design and manufacturing product development at San Francisco-based Autodesk Inc., adds that generative AI “is very good where there is a degree of freedom in what you’re looking for.”
For example, Autodesk used generative AI and worked with Matsuura Machinery Corp. to create a unique workflow using toolpaths as an input to its product, Generative Design in Fusion. The new 3D geometric model is based on loading conditions and constraints such as materials, force, the number and type of bolts, and allowable deflection under load, Hooper says. He notes that, without generative AI, creating such a model could take a week.
“By using generative AI, we were able to 3D print the fixture overnight so we could do machining in the morning,” he says. “You don’t worry about whether the fixture has a specific form—you worry about whether it meets the requirements you specify.”
One challenge with large language model-driven generative AI: If you give it the same prompt three different times, it could yield three different responses.
“This does not build confidence with engineers, particularly when they’re looking for precise outcomes,” Hooper says. “Where we’ve struggled is when you start to use hard mathematics or engineering principles with precision.”
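The variability Hooper describes comes largely from how language models sample their next token. The following is a simplified, hypothetical illustration in plain Python—the token probabilities are invented—showing why the same "prompt" can yield different answers, and why greedy (temperature-zero) decoding is deterministic:

```python
import math, random

# Hypothetical next-token scores an LLM might assign after a prompt.
logits = {"steel": 2.0, "aluminum": 1.6, "titanium": 0.5}

def sample_token(logits, temperature, rng):
    """Softmax sampling: higher temperature spreads probability mass,
    so repeated calls with the same 'prompt' can return different tokens."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the same answer
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against float rounding

rng = random.Random(1)
varied = {sample_token(logits, 1.0, rng) for _ in range(50)}  # several distinct answers
greedy = {sample_token(logits, 0, rng) for _ in range(50)}    # always one answer
```

Engineering workflows that need precise, repeatable outputs typically pin the sampling down (low or zero temperature, fixed seeds) or wrap the model in deterministic tooling.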
In contrast, according to Hooper, classic ML excels in preventative maintenance—looking for machine behavior that is, or is about to be, out of spec. One example is producing high-value parts for the aerospace industry, where a part can be worth more than the cost of repairing the machine that produces it.
“Machine learning can offer useful tools to look at thousands of different types of machines, jobs run and evaluate whether the machine is exhibiting behavior that looks like it’s about to fail or operate outside of tolerance,” Hooper says.
As an example, Hooper suggests using a machine monitoring product, like the one offered by Mazak Corp. He says that this tool can monitor if a spindle is vibrating outside of tolerance, compensate programming if it is and order a new spindle if it is close to failing.
“You’re using machine learning to evaluate past performance and do predictive analysis,” Hooper says. “Sometimes a machine could be down for four weeks or even three months. That could kill a small shop.”
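The kind of out-of-tolerance detection Hooper describes can be sketched very simply. This is a hypothetical, plain-Python stand-in—a trailing-window statistical threshold on a simulated vibration trace, not any vendor's actual monitoring product:

```python
import statistics

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the
    trailing-window mean — a simple stand-in for learned models of
    'normal' machine behavior."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        if sd > 0 and abs(readings[i] - mean) > threshold * sd:
            flagged.append(i)
    return flagged

# Simulated spindle-vibration trace: steady around 1.0 mm/s, spike at index 30.
trace = [1.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(40)]
trace[30] = 1.5
anomalies = flag_anomalies(trace)  # → [30]
```

Production systems learn the "normal" envelope from thousands of machines and jobs rather than a fixed window, but the principle—compare current behavior against a learned baseline and act before failure—is the same.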
Autodesk’s FlexSim uses ML to predict inventory flow through a factory. As an example, Hooper cites a case in which the average processing time for a part is supposed to be two minutes but might vary by a few seconds. Manufacturers want to predict the flow of inventory more precisely. However, without such software, he says that managers need to be on the shop floor and time things manually with a stopwatch, then feed that data into a statistical analysis tool.
If a manufacturer implements machine monitoring sensors combined with Autodesk’s FlexSim, it can use that data to predict the flow of inventory through the factory and look for bottlenecks in the process.
“You might find one machine is slowing the whole process down,” Hooper says. “You introduce two machines, and the whole process will speed up.”
With FlexSim, he says, manufacturers can run “what if” scenarios to determine the best way to increase performance at the lowest cost, which might be adding another machine, rearranging the process or other alternatives.
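A "what if" scenario of the kind described above can be illustrated with a toy flow-shop simulation. This plain-Python sketch uses invented processing times and is not FlexSim itself; it compares a two-station line before and after adding a machine at the bottleneck:

```python
import random

def simulate(parts, station_times, rng):
    """Serial production line: each part flows through stations in order;
    a station with multiple machines uses the earliest-free one.
    station_times: list of (mean_minutes, machine_count)."""
    free_at = [[0.0] * count for _, count in station_times]
    ready = [0.0] * parts  # when each part is ready for the next station
    for s, (mean, count) in enumerate(station_times):
        for p in range(parts):
            m = min(range(count), key=lambda i: free_at[s][i])
            start = max(ready[p], free_at[s][m])
            duration = rng.uniform(0.9 * mean, 1.1 * mean)  # per-part variation
            free_at[s][m] = start + duration
            ready[p] = start + duration
    return max(ready)  # makespan: when the last part finishes

rng = random.Random(7)
baseline = simulate(100, [(2.0, 1), (5.0, 1)], rng)  # station 2 is the bottleneck
whatif = simulate(100, [(2.0, 1), (5.0, 2)], rng)    # what if we add a machine there?
```

Doubling up the slow station roughly halves the makespan in this toy case—the same reasoning, at much higher fidelity and fed by real sensor data, that lets simulation software rank layout and capacity options by cost and benefit.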
“We’ll be able to tell the user and the operator exactly where each machine in the factory should be placed,” Hooper says.
“Most manufacturers want to run lean manufacturing with as small an amount of inventory as possible and a high machine utilization,” he continues. “You can use this technology to simulate the flow of product through the factory and minimize the amount of inventory in use, reduce the amount of cash tied up in inventory and maximize machine effectiveness.”
Other companies are enabling similar successes with AI technology. For example, Raghuvanshi says Siemens’ Selection Prediction in its NX Platform helps users plan and design what they are going to make on the factory floor.
Meanwhile, SAP has developed ML capabilities to use in both manufacturing and warehouse production. The company uses these capabilities to deal with any kind of disruption as well as for predictive maintenance of machines, according to Vallejo.
“When monitoring conditions and analyzing machine behavior, we want to get ahead of the game when it comes to predicting what a machine will do,” he says.
Classic ML can also help with supply chain predictions.
“Instead of a static prediction, machine learning can help with mining operational data to provide a risk-adjusted model of what the lead time is going to be,” Vallejo notes. SAP now offers this capability as part of its integrated business-planning platform.
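The risk-adjusted idea Vallejo describes—quoting a lead time that reflects the slow tail of history rather than a single static number—can be sketched as an empirical quantile. A minimal, plain-Python illustration with invented supplier history (not SAP's actual model):

```python
def risk_adjusted_lead_time(history_days, service_level=0.9):
    """Quote the lead time met in `service_level` of past orders
    (an empirical quantile), instead of a single static estimate."""
    ordered = sorted(history_days)
    idx = min(len(ordered) - 1, int(service_level * len(ordered)))
    return ordered[idx]

# Hypothetical supplier history: mostly ~10 days, occasional disruptions.
history = [9, 10, 10, 11, 10, 9, 12, 10, 21, 10,
           11, 10, 9, 10, 18, 10, 11, 10, 10, 13]
print(risk_adjusted_lead_time(history))  # → 18 (vs. an 11.2-day average)
```

Planning against the 90th-percentile lead time rather than the average is what makes the prediction "risk-adjusted": the disruptions in the operational data move the planned number, not just the midpoint.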
Through a partnership with Software AG, SAP’s TrendMiner application uses ML to collect IoT data and create visualizations and predictions based on hundreds of sensors across manufacturing operations.
By contrast, deep-learning AI excels at tasks with more unstructured data such as visual inspections for quality, Vallejo explains.
“When you produce hundreds and thousands of products, recording non-conformance is a daunting task,” he says. “Humans have to be in place to do that.”
With deep learning, Vallejo adds, users take pictures and train the model on what good looks like and what is a potential deviation. “You use the pre-trained model later for visual inspection,” he says, noting that SAP’s Digital Manufacturing platform offers an AI-powered, deep-learning visual inspection.
SAP has demonstrated a capability that will make shift changes easier at factories, according to Vallejo. Usually, at a shift handover, the outgoing shift has to collect data and documents from machines and people on the floor, then summarize everything for the new shift. This capability will be added to SAP’s existing digital supply chain management software.
“Generative AI gives a really good summary of the machine-produced data,” says Vallejo. “The new shift can ask questions with a prompt, cutting down on the shift handover and improving accuracy. We’re working with customers and have gotten really good feedback. We’re now productizing this for use for all customers.”
Raghuvanshi adds a caveat: generative AI and deep-learning models require huge amounts of data to train. With only hundreds or thousands of examples available to teach a model, problem-solvers should choose classic ML, because deep-learning algorithms need far more data to achieve meaningful performance. And if users are relying on image data or other forms of unstructured data, the model requires that much more information.
“If I have to do this task using image data, then I need lots of data so that my deep-learning algorithm will be able to figure out the useful features out of it,” Raghuvanshi says. But if minimal data is fed into these algorithms, which he says are typically artificial neural networks, the result is “overfitting, which is memorizing all the data.”
By memorizing the data, the model tends to do extremely well when it sees similar examples. The problem, Raghuvanshi points out, is that similar examples are rare in the real world.
“If the model is faced with something a little different or the kind of variation you expect in the real world, then it will not perform really well on that,” he says. “That’s the reason why you need a lot of data and why you choose deep learning when you have access to a lot of data that has a lot of variations built into it.”
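The memorization failure mode described above can be demonstrated in a few lines. This hypothetical plain-Python sketch uses a one-nearest-neighbor "model"—the ultimate memorizer—trained on a handful of near-identical synthetic sensor readings, then tested against readings with realistic variation:

```python
import random

rng = random.Random(3)

# Underlying rule: label is 1 when the true value exceeds 0.5;
# the observed feature is that value plus measurement noise.
def make_point(rng, spread):
    x = rng.uniform(0, 1)
    return (x + rng.gauss(0, spread), 1 if x > 0.5 else 0)

train = [make_point(rng, 0.02) for _ in range(10)]  # few, low-variation examples

def one_nn(x):
    """Memorize-everything model: answer with the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_acc = sum(one_nn(x) == y for x, y in train) / len(train)  # perfect recall
test = [make_point(rng, 0.15) for _ in range(200)]              # real-world variation
test_acc = sum(one_nn(x) == y for x, y in test) / len(test)     # noticeably worse
```

Training accuracy is perfect—the model has seen every point before—while accuracy on varied data drops. That gap is exactly why deep learning is chosen only when abundant, varied data is available.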
Classic ML algorithms are easier to train, which makes it possible to perform more experiments in the same amount of time, Raghuvanshi adds. He notes that classic ML models also require less memory and computational muscle, so they are easier to deploy. Beyond the type and amount of data, interpretability of the final outcome is a key consideration.
“In certain industries like healthcare and insurance, you have to have some kind of explanation to provide why a model came out with a particular decision or prediction,” Raghuvanshi says.
But it’s difficult to peek into deep-learning methods and understand why the method arrived at a particular decision. When explainability is important in a particular context, Raghuvanshi suggests choosing a classic ML algorithm.
“They are relatively easier to interpret compared to deep-learning methods, which we normally refer to as some kind of black box.”
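A toy illustration of why classic models are easier to explain: in a linear model, each feature's contribution to a decision is just its weight times its value, so the explanation falls straight out of the parameters. The model, features and coefficients below are entirely hypothetical, in plain Python:

```python
# Hypothetical, illustrative coefficients for a simple insurance scoring model.
weights = {"age": -0.02, "prior_claims": -0.8, "policy_years": 0.3}
bias = 1.0

def score(features):
    return bias + sum(weights[k] * v for k, v in features.items())

def explain(features):
    """Per-feature contribution to the decision — trivially readable here,
    but buried inside millions of parameters in a deep network."""
    return sorted(((k, weights[k] * v) for k, v in features.items()),
                  key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"age": 45, "prior_claims": 3, "policy_years": 10}
print(explain(applicant))  # largest contributor listed first
```

A regulator or customer can be told exactly which factor drove the score and by how much—the kind of accounting a deep "black box" cannot offer without additional, approximate explanation tooling.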