
How to save a cool $100 million. Each year.

By Karen Haywood Queen, Contributing Editor, SME Media

Manufacturers are moving past data generation to data crunching, wading into the ‘data swamp’

A major aerospace equipment manufacturer struggled to balance supply and demand. The failure resulted in long lead times, high inventory, rising costs and an inability to meet customer demand. The manufacturer risked loss of market share.

Photo: Jeremiah Stone, vice president and asset performance manager at GE Digital, speaking recently on the Industrial Internet of Things at the Computer History Museum in Mountain View, CA.

The answer? Leveraging mounds of data the company already had in order to identify problems during production, said Steve Shepley, principal, manufacturing, at Deloitte.

“The value to the customer was in getting the product quicker,” he said.

The value to the manufacturer: delivery times reduced by 45%, total inventory reduced by 22% and production savings of $100 million a year.

The value might also come in preventing equipment failure.

In the last few years, GE’s aviation clients saw that some FAA-certified GE airplane engines were degrading at faster rates than expected, Jeremiah Stone, vice president and asset performance manager at GE Digital, said.

“That’s bad—you never want an engine to perform differently than expected,” he said.

Digging into the data revealed that the degradation was not due to manufacturing issues. Instead, the problems correlated with planes that frequently flew through the Far East, where there are high amounts of particulate pollution in the air, and the Middle East, which has an abundance of sand, Stone said. Those atmospheric conditions caused friction that led to faster-than-expected degradation of the fan blades.

“Just by looking at the fan blades, we couldn’t see it. It’s a hidden history,” he said. “When an airline buys a plane, they don’t tell you where they’re going to fly it. But by correlating the condition of the blades with the flight plans, we could figure out the relationship.”
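GE has not published the analysis itself, but the kind of join Stone describes, matching blade-condition records against flight history and checking for a relationship, can be sketched in a few lines of Python. The engine IDs, column names and route-exposure figures below are invented for illustration.

```python
import pandas as pd

# Hypothetical inspection records: one row per engine overhaul,
# with a measured fan-blade degradation rate.
inspections = pd.DataFrame({
    "engine_id": ["E100", "E101", "E102", "E103"],
    "blade_wear_per_1k_cycles": [0.8, 2.1, 0.7, 1.9],
})

# Hypothetical flight history: share of each engine's flights flown
# through high-particulate regions (Far East pollution, Middle East sand).
flight_history = pd.DataFrame({
    "engine_id": ["E100", "E101", "E102", "E103"],
    "dusty_route_share": [0.05, 0.70, 0.10, 0.65],
})

# Join the two sources on engine ID, the "hidden history" Stone describes.
merged = inspections.merge(flight_history, on="engine_id")

# A simple correlation is enough to surface the relationship between
# route exposure and faster-than-expected blade degradation.
corr = merged["dusty_route_share"].corr(merged["blade_wear_per_1k_cycles"])
print(f"Correlation between dusty-route exposure and blade wear: {corr:.2f}")
```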

Based on that knowledge, the airlines changed the takeoff techniques in those parts of the world, Stone said. Instead of the gas-saving low-slope takeoffs, pilots now do a full-power lift off to get out of the sand or pollution as quickly as possible. On the maintenance side, the engines on planes that fly in those areas are washed more often to remove the grit.

“Three groups benefited,” Stone said. “GE benefitted because many of our contracts are performance based. Customers don’t just buy the engine for a fixed price. They buy a service contract. If we aren’t meeting our contractual obligations, we’re going to suffer penalties and lower profits. The customer benefits from having stable capacity, not having to have engines pulled offline. Regulators benefit because they can support the operators and engine manufacturers in delivering safe transportation to the marketplace at the lowest aggregate cost possible.”

But one problem manufacturers face in leveraging big data is … all that data.

“When we started four years ago, the assumption was that if you gathered all the data you could into one place, magic would happen,” Stone said. “We learned that the probability of magic happening was remote.”

“Big data is definitely becoming bigger and bigger on the shop floor,” JP Provencher, vice president, manufacturing strategy and IoT solutions at PTC, said. “The problem for customers is not generating data. They’re already generating data. But they’re using only 3 or 4%. The challenge is making it meaningful.”

“Over 86% of the data lake is disconnected,” said Ken Tsai, head of database and data management product marketing at SAP, using the term for the pool of data and data signals that, in this case, is not being shared with the rest of the business. “It’s not connected to the applications. It’s a data swamp. That’s why you have to have a process of dealing with raw data that may or may not have any importance.”

“Companies want to comb through and understand the data that are high value and store that data,” he said. “The raw data that may have value sometime—they want to store it too.”
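Tsai’s description suggests a two-tier process: land everything raw, but promote only the signals the business already understands into a curated, connected layer. A minimal sketch of that routing step, with hypothetical tags and record fields, might look like this:

```python
# A minimal sketch of a "process for dealing with raw data": keep
# everything in a raw landing zone, but promote only known, high-value
# signals into a curated store wired to applications.
# The tag list and record fields are hypothetical.

RAW_STORE = []       # stand-in for a data lake / object store
CURATED_STORE = []   # stand-in for tables connected to applications

HIGH_VALUE_TAGS = {"line1.temperature", "line1.vibration", "line1.throughput"}

def ingest(record: dict) -> None:
    """Land every record raw; promote known, high-value signals."""
    RAW_STORE.append(record)                  # nothing is thrown away
    if record.get("tag") in HIGH_VALUE_TAGS:  # only a small share is promoted
        CURATED_STORE.append(record)

ingest({"tag": "line1.temperature", "value": 81.2, "ts": "2017-05-01T08:00:00"})
ingest({"tag": "line7.unknown_sensor", "value": 0.3, "ts": "2017-05-01T08:00:00"})
print(len(RAW_STORE), "raw records,", len(CURATED_STORE), "curated records")
```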

Getting selective about data

Deciding precisely which data to leverage is critical. Too often, companies get bogged down as they seek to organize, understand and achieve 100% accurate reporting from every data point.

The answer to optimize production and shorten delivery cycles might rest in nine data points out of 6000, as happened with another Deloitte client, Shepley said.

“Our clients rarely, if ever, have a big data problem,” he said. “Most often, they have an information problem and an inability to get to an action problem.”

It is common for manufacturers to ask for help building “a massive high-speed database” to handle mushrooming data volume, Shepley said. And, he added, it is common for the companies to have “no idea” which data is creating value for them.

Helping plant managers

Plant managers also want data immediately.

“More and more plant managers want real-time visibility to operators,” Provencher said. “They can no longer work with a report that tells them Thursday morning everything that went wrong Wednesday. They want that data now—to make faster and better decisions.”

Manufacturers can’t find what they need if they don’t know what they’re looking for. To best approach the issue, manufacturers should set their business goals and work backwards to crunch only the data they need. Let the rest go.

“It may sound obvious, but there are a lot of fishing expeditions out there,” Stone said. “Now we’re more focused on the end state to determine the data and data model we need. It’s better methodology to link outcomes to the needed data.”

But the old techniques and tools don’t work well to extract, manage and leverage operation-side data.

“We discovered that techniques that work so well for transactional back-office data don’t work so well for operational, big data,” Stone said. “We had to unlearn everything we had learned from the enterprise world in terms of extraction and modeling technology.”

For example, most business intelligence systems for back-office data operate with highly structured data models labeled in ways that are easy for the person modeling the data to understand, he said. An employee could very well be described with an ID number, first name and last name.

Not so much in the operational world.

For example, a pump might be labeled STH PMP 100. Unless you work with that system, you wouldn’t know the label stands for pump 100 in the south wing.

“In the operational world, we are often dealing with data coming out of SCADA systems or control systems,” Stone said. “The data has no inherent structure. The data is named in a very cryptic way.”

Part of the reason for those cryptic names is limited processing space decades ago when the pump, sensor or other equipment was commissioned, he said.

“The industrial world, compared with the consumer world, uses lower levels of computing ability,” Stone said. “When I’m a software engineer in the industrial world, I have a limited amount of space to work with.” Hence, the shorter, cryptic names.

“Traditionally, the engineers know what those names are,” he said. “They replicate all the data out of the control system into a big database.”

But those engineers don’t have the skill set to build a system to model that data.

It’s laborious to build a new data model out of dozens of source systems because of the variety of sources and variety of structuring of data, Stone said.

“You bring in IT people who don’t know what those names mean. You have to find the person who built the data schema to understand what it means because there is no added structure on top of the time structure. You have to find the engineers who know what STH PMP 100 means,” he added.

“Which one is the input pressure? Which one is the output pressure? Do they work here anymore? Are those engineers alive? You’re talking about assets that may have been commissioned 10–40 years ago. You can imagine this is a detective exercise.”
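What that detective work ultimately produces is a mapping layer that turns cryptic control-system tags into names an analyst can use. A minimal sketch of such a layer, with an invented handful of tag abbreviations, could look like this:

```python
# A minimal sketch of the mapping layer that expands cryptic control-system
# tags into structured metadata. The abbreviations and their expansions
# here are invented for illustration.
TAG_DICTIONARY = {
    "STH": ("location", "south wing"),
    "NTH": ("location", "north wing"),
    "PMP": ("asset_type", "pump"),
    "CMP": ("asset_type", "compressor"),
}

def decode_tag(tag: str) -> dict:
    """Expand a tag like 'STH PMP 100' into structured metadata."""
    meta = {"raw_tag": tag}
    for token in tag.split():
        if token in TAG_DICTIONARY:
            key, value = TAG_DICTIONARY[token]
            meta[key] = value
        elif token.isdigit():
            meta["asset_number"] = int(token)
    return meta

print(decode_tag("STH PMP 100"))
# {'raw_tag': 'STH PMP 100', 'location': 'south wing',
#  'asset_type': 'pump', 'asset_number': 100}
```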

The old way meant an engineer had to spend countless hours using Excel to integrate multiple legacy sources into a single data model, Stone said. For one GE client, an oil and gas drilling company, the process took longer than six months, he said. Highly trained engineers spent enormous amounts of time “doing basic drudgery: data modeling and data management.”

The data modeling/discovery portion of any big data project has to be planned out on a wall calendar “because it takes so long,” he said. “The tools we have built in the last few decades are effectively useless. Those tools make assumptions that data is inherently structured. That is true in the back-office world; it’s not in the operational world.”

Another problem: These legacy systems have been designed with long replacement cycles, creating a challenge to achieve benefits from today’s technology without upgrading and recommissioning an entire plant, Stone said.

Freeing engineering talent from drudge work

These new tools and techniques are speeding up the process and freeing up engineering talent from drudge work.

GE last year bought Bit Stew Systems and its artificial intelligence Data Management Work Bench and Mix Core data integration platforms. The mapping model that took longer than six months to build would now take 4.5 hours with Bit Stew’s data ingestion and modeling technology, which uses advanced artificial intelligence techniques to derive the correct data relationships and meaning, he said.

Similarly, SAP’s HANA platform has enabled companies to generate reports in minutes or even seconds instead of hours or days, Tsai said.

At GE, engineering consultants focus now on rating work done by the artificial intelligence, Stone said.

“Your primary improvement is in human productivity,” he said. “Before, it was like being on a wild goose chase to figure out what was the root cause of the change in quality. With these tools, we take away a lot of that manual process from our engineers. They can do their job, which is analysis, not going over data sets manually. They can become significantly more productive and effective. We’ve moved past the stage of experimentation and into the stage of performance.”

Working to connect all data sources

Key steps are connecting all the data sources and then having the flexibility to deliver that information through role-based applications that show exactly what data each person needs to see and digest to achieve their goals, Provencher said.

In addition to freeing up engineers, proper management of big data can emancipate other workers, too.

One Deloitte client that makes construction equipment was losing market share because its delivery times were too vague.

“They were struggling in one of their production facilities with how long it would take to get through quality controls and get to the customer,” Shepley said.

Meantime, the manufacturer’s clients were paying specialized, high-cost technical talent to stand around and wait for those deliveries, he said.

“They were suffering massive amounts of lost sales,” Shepley said. “Their competitors were eating their lunch.”

The manufacturer is now competitive again.

Crunching data from sensors

Deloitte’s platform helped the company use data from sensors and other data points to understand where production and quality control were getting hung up and resolve the issues, he said.

The manufacturer used to give delivery dates of plus or minus 20 days. Now, it provides delivery estimates accurate to the hour or better.

“When you focus on the right outcome, the data that you need is far less than you expected,” Shepley said.

Finally, manufacturers also must move beyond the approach of replacing all their current technology as they seek to solve new problems.

In the past, when companies wanted to improve their manufacturing system landscape, they replaced the previous vendor’s software with a complete new suite of software and/or technology, a “rip-and-replace approach,” Provencher said.

“That doesn’t work anymore, and there is no more appetite for multi-year disruptive rollouts,” he added.

Overlaying IoT platform on existing assets

The new approach is overlaying an IoT platform on top of existing manufacturing software and assets, Provencher said.

“We tell our customers ‘Keep all the systems you have and investments you made so far’,” he said. “Stop trying to replace and build one-to-one integrations. An IoT platform connects to all your existing assets and systems, talks all these languages and allows companies to transform existing data into new sources of insights and business value.”

Companies need to start with the end goal in mind and work backwards to decide which data they need and leverage that data to achieve the goal.

“Instead of flooding them with all the raw data, allow them to create role-based applications with machine and business data in one screen, showing exactly what they need and what they’re interested in,” Provencher said. “They have access to real-time data, simplified so they see only what they care about.”
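Provencher did not detail an implementation, but the role-based filtering he describes can be illustrated with a simple sketch: machine data and business data sit in one record, and each role sees only the fields it needs. The roles and field names below are hypothetical.

```python
# A minimal sketch of role-based views over combined machine and business
# data. Roles, fields and values are hypothetical.
COMBINED_RECORD = {
    "machine_id": "CNC-07",
    "vibration_rms": 0.42,        # machine data from the IoT platform
    "spindle_temp_c": 61.3,
    "order_number": "SO-10482",   # business data from ERP/MES
    "due_date": "2017-06-02",
    "scrap_cost_usd": 1250.0,
}

ROLE_VIEWS = {
    "operator": ["machine_id", "vibration_rms", "spindle_temp_c"],
    "plant_manager": ["machine_id", "order_number", "due_date", "scrap_cost_usd"],
}

def view_for(role: str, record: dict) -> dict:
    """Return only the fields a given role needs to see."""
    return {field: record[field] for field in ROLE_VIEWS[role]}

print(view_for("operator", COMBINED_RECORD))
print(view_for("plant_manager", COMBINED_RECORD))
```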

Focusing less on a specific technology

Getting manufacturers to focus on business outcomes rather than specific technology will help, Shepley said.

“Most often, the client has bought technology and is trying to find homes for it. We focus on re-vectoring to the business outcome. We model the outcome first. We build backwards from the problem and see what processes, technology and data you need to support that new action. You might need to cobble together multiple technologies. No one technology gets you to that outcome.”

Failure may pave the way to success.

“Our clients get over the need to follow the traditional 12-month capital cycle,” Shepley said. “Our clients can use our platforms and tools, experiment, fail fast if they’re going to fail and then move ahead.”

Creating teams with varying skill sets

To make all this work, more than one skill set is needed, Shepley said.

A multi-disciplinary team should include the expected data scientists, subject matter experts and coders.

“They do the heavy lifting on the back end, building a platform that runs at the speed needed to deliver results,” he said. To protect the platform and the data, security experts are needed who understand both IT and OT security.

But also critical are people who know the business issues and so-called experience designers who often have artistic or social science backgrounds, Shepley said.

People who know the business case will make sure the outcome creates business value, he said.

“If you don’t have these people involved, you may create advanced, slick solutions, but you won’t have any business value, often the most important cog in the wheel.”

Integrating the data into a frictionless environment to extract business value “is really the Holy Grail all of us are trying to go through,” Tsai said.

“Big data is now happening everywhere,” he added. “We are working to lower the threshold.”
