Every human experiences fear, and each of us experiences it in our own way. In the AI world, fear is driven by unfamiliarity with the process, the professional impact of failure and the daunting task of pulling together all of the people and perspectives required just to get started. In aerospace, this challenge is compounded by resource shortages, supply chain challenges and market volatility. Add massive amounts of disparate, disconnected data and you have the recipe for gridlock, failed projects and wasted money.
Failure to launch is real. Here are three key lessons learned from our work with Rolls-Royce and Gulfstream that will clear your AI project for takeoff.
One bite at a time. Aerospace companies want to increase operating margins. From reduced downtime to predictive quality to forecasting the movement of critical parts, there are plenty of opportunities. Don’t get caught up in all of the possibilities. Find one that can be accomplished quickly and establishes your team’s and your vendor’s credibility. This is important because AI is an iterative process. If the availability of data and subject matter expertise is inconsistent, your project will stall. At Rolls-Royce, we discussed use cases for predictive maintenance and scrap reduction before settling on predictive quality at the test stand. Why? Because the data and customer resources we needed were readily available and the path for integrating the capability was clear. Think Occam’s razor here: The most straightforward path is the best.
A key point of contention between the business and IT is who will retain control of this new capability and how it will be supported and maintained. Cloud vendors tout quick starts through readily available, affordable infrastructure. For the business, this represents a clear path to get started without the bureaucracy of IT processes. However, this can lead to technical lock-in: a situation where all of your logic, models and data processes are stuck within one provider’s systems. Costs will rise, flexibility will be limited and you’ll feel trapped. Remember, the value is in the logic, models and data constructs, not necessarily the infrastructure. At Gulfstream, we avoided this trap by leveraging containerization, an approach that keeps your logic and functionality completely portable so it can be deployed and scaled across any combination of infrastructure based on requirements and cost. Gulfstream was able to maintain control of its intellectual property and could scale it to the provider(s) of its choice.
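Containerization is a packaging choice, but the portability it protects starts in the code: keep model logic behind a plain, provider-agnostic interface rather than wiring it to one cloud’s SDK. The sketch below illustrates the idea only; every name is hypothetical, and a fixed threshold stands in for a real predictive-quality model.

```python
# Hypothetical sketch: provider-agnostic model logic that can be
# containerized and deployed to any infrastructure unchanged.

import json


class QualityModel:
    """Stand-in for a trained predictive-quality model.

    Depends only on the standard library, so the same container
    image runs on-premises or on any cloud provider.
    """

    def __init__(self, threshold: float):
        self.threshold = threshold

    def predict(self, vibration_rms: float) -> str:
        # A real model would be a trained estimator; a fixed
        # threshold keeps the sketch self-contained.
        return "fail" if vibration_rms > self.threshold else "pass"

    def to_json(self) -> str:
        # Parameters serialize to plain JSON, so the asset you own
        # is a portable file, not a vendor-specific artifact.
        return json.dumps({"threshold": self.threshold})

    @classmethod
    def from_json(cls, payload: str) -> "QualityModel":
        return cls(**json.loads(payload))


model = QualityModel(threshold=0.8)
restored = QualityModel.from_json(model.to_json())
print(restored.predict(0.5))   # below threshold
print(restored.predict(0.95))  # above threshold
```

Because nothing here references a specific provider, the surrounding container can be rebuilt and redeployed wherever requirements and cost dictate.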
It’s easy to fall in love with a quick, simple and affordable use case. But what happens if you want to scale it to multiple lines, plants and processes? Is it still quick, simple and affordable? Cloud solutions can be deceptive in this regard: storage is inexpensive, but the computing capacity required to train and tune models can get costly as datasets grow. One customer projected a 35× increase in cloud costs associated with scaling a predictive maintenance model across four plant operations. Again, containerization can be used to optimize the costs of scaling. At Rolls-Royce, we leveraged containerization to retrain models on lower-cost local infrastructure, then deployed the retrained models to cloud environments for optimal access and availability. This helped keep annual cost growth from scaling under 5 percent.
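The split described above can be sketched as a two-stage workflow: the compute-heavy retraining step targets owned local hardware, and only the small finished artifact moves to the cloud for serving. This is an illustrative sketch, not the actual Rolls-Royce pipeline; the function names, the mean-based "training" and the artifact path are all hypothetical.

```python
# Hypothetical sketch of a retrain-locally, serve-in-cloud workflow.
# Training (compute-heavy, billed by the hour in the cloud) runs on
# local infrastructure; only the finished artifact is uploaded.

import json
import pathlib
import tempfile


def retrain_on_local_infrastructure(readings: list) -> dict:
    # Stand-in for model training: fit a simple mean-based threshold.
    mean = sum(readings) / len(readings)
    return {"threshold": round(mean * 1.2, 3)}


def export_artifact(model: dict, path: pathlib.Path) -> None:
    # The deployable asset is a small file, cheap to store anywhere.
    path.write_text(json.dumps(model))


def deploy_to_cloud(path: pathlib.Path) -> str:
    # Stand-in for the upload step; a real pipeline would push the
    # same container image it tested locally.
    return f"deployed {path.name} ({path.stat().st_size} bytes)"


readings = [0.61, 0.58, 0.64, 0.59]
model = retrain_on_local_infrastructure(readings)

with tempfile.TemporaryDirectory() as tmp:
    artifact = pathlib.Path(tmp) / "quality_model.json"
    export_artifact(model, artifact)
    print(deploy_to_cloud(artifact))
```

The design point is the boundary: everything expensive stays on hardware you already own, and the cloud bill covers only storage and serving of a small artifact.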
Managing complexity, control and cost is core to the success of any machine learning or AI effort. We’ve leveraged templates developed over years of projects to simplify, understand and communicate these guidelines across key project teams.