Industry and manufacturing are increasingly turning to artificial intelligence (AI) to boost productivity and quality. However, one executive says AI needs to be deployed with care.
Dominic Gallello is CEO of Industrial Internet of Things and enterprise artificial intelligence company Symphony AzimaAI. He has led three public and private software companies over the past 16 years. His executive career also included a stint at Autodesk.
Gallello was interviewed by email.
QUESTION: Can artificial intelligence (AI) burn out workers? I thought it was supposed to make things easier. What happened?
GALLELLO: If implemented correctly, no, it doesn’t burn out workers. But if you want to burn out a control room operator, provide them with a litany of alarms and recommendations that are disjointed and don’t boil down the problems that are occurring into meaningful recommendations.
This is where AI should come in and be a tremendous help. The computer can process and correlate a tremendous amount of information to infer insight that the human mind is not capable of.
But if implemented poorly, yes. We have seen over and over again that when implemented incorrectly, the AI throws off false alarms. The computer is saying there is a problem when there is none. That leads to wild goose chases. Spending your day eliminating false positives is a mind-numbing exercise for most.
QUESTION: You’ve reported an anecdote of a gas services company having a lot of false positives. Can you provide a little background and describe what happened?
GALLELLO: They were using a system based upon second-generation AI, so-called pattern recognition, which is notorious for throwing off as much as 60 percent false positives. Having to run them down and find out that there really is not a problem is time-consuming and expensive, and it makes it very hard to figure out what is real and what is not.
AI is necessary but insufficient. Legacy second-generation AI systems train the model on known machine faults and if the same fault occurs, the system will catch it. But what happens when you don’t have any history of a fault and then it occurs? The model will not catch it.
Third-generation AI turns the paradigm upside-down. We train the AI model on what is "normal" operating behavior and then catch any deviation from that normal using third-generation unsupervised learning. Now, we're just getting started.
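The "train on normal, flag deviations" idea can be illustrated with a minimal sketch. This is a hypothetical example, not Symphony AzimaAI's method: it learns the mean and spread of a sensor reading from healthy-only data, then flags any reading that strays too far from that envelope, with no fault history required.

```python
import statistics

def fit_normal(readings):
    """Learn the normal operating envelope from healthy-only sensor data."""
    return statistics.fmean(readings), statistics.pstdev(readings)

def is_anomalous(reading, mean, std, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    return abs(reading - mean) > threshold * std

# Illustrative vibration amplitudes recorded during known-healthy operation
normal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08]
mean, std = fit_normal(normal)

print(is_anomalous(1.03, mean, std))  # inside the normal envelope
print(is_anomalous(2.50, mean, std))  # flagged, even with no fault history
```

A real system would model many correlated sensors at once rather than a single channel, but the contrast with the second-generation approach is the same: nothing here was trained on a known fault.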
A critical step is to connect the AI to a Failure Mode Effect Analysis library that accurately pinpoints the components in the machine that are the bad actors and provides recommendations to the maintenance engineers on actions to be taken.
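Connecting an anomaly to an FMEA library amounts to a lookup from an anomaly signature to a known failure mode, component, and action. A hypothetical sketch, with made-up signatures and entries for illustration only:

```python
# Illustrative FMEA library: each anomaly signature maps to the suspect
# component, its failure mode, and a recommended maintenance action.
FMEA_LIBRARY = {
    "high_vibration_1x": {
        "component": "rotor",
        "failure_mode": "imbalance",
        "recommendation": "Balance rotor at next planned shutdown.",
    },
    "bearing_temp_rise": {
        "component": "drive-end bearing",
        "failure_mode": "lubrication breakdown",
        "recommendation": "Inspect and re-grease the drive-end bearing.",
    },
}

def diagnose(anomaly_signature):
    """Map a detected anomaly to an FMEA entry, or escalate if it is novel."""
    entry = FMEA_LIBRARY.get(anomaly_signature)
    if entry is None:
        return {"component": "unknown",
                "recommendation": "Escalate for engineering review."}
    return entry

print(diagnose("bearing_temp_rise")["recommendation"])
```

The fallback branch matters: an anomaly the library cannot name is still surfaced, rather than silently dropped, which is what turns raw alarms into the "meaningful recommendations" described above.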
QUESTION: Have companies started looking to AI as a panacea?
GALLELLO: Nobody does for long. In chasm theory terms, we are moving past the visionaries and early adopters. The early majority wants it to work, solve specific problems, and deliver financial results. This is a tough, “prove it to me” crowd.
QUESTION: Are there unintended consequences with AI? If so, what are they?
GALLELLO: AI is like any other technology shift. I liken it to the shift to object-oriented programming. There was a big rush to it, and there were many poor implementations, but eventually it became the norm. AI is going through that same journey.
QUESTION: What adjustments should companies make in how they deploy AI?
GALLELLO: It is better to have engineers learn AI than to have a division between domain expert engineers and pure data scientists. When subject matter experts can understand and harness the power of AI, it will lead to much faster and more successful implementations.