Dijam, what is GridRaster?
GridRaster has developed a cloud platform that enables truly immersive augmented reality (AR) and virtual reality (VR) on mobile devices. Its high-performance, scalable AR/VR cloud platform is powered by distributed edge computing, low-latency remote rendering and 3D AI-based spatial mapping. It takes the best of gaming technology and concepts, the power of cloud computing and 3D AI, and the immersiveness of AR/VR to give superpowers to the workers on the manufacturing floor.
What led you to establish the firm?
While co-founder Rishi Ranjan and I were working in our previous jobs, the Oculus DK1 VR headset came out. The first time we experienced the DK1, we were blown away: We thought, “OK. This is the next computing platform.” But we knew that the way this experience was provided to us—DK1 was tethered to a PC—would never scale. At Qualcomm, Rishi worked on how to make that possible on the mobile phone. We believed that the standalone mobile was the opportunity where you could take this really powerful medium to the masses. And in pursuit of that process, we left our jobs and started GridRaster in 2015. We realized that we could bring those mixed-reality experiences to a mobile device by doing all the heavy lifting on the cloud.
What has transpired in the last five years with regard to your work in manufacturing?
We started in gaming. But two years down the road, we began to see the manufacturing industry beginning to dabble with VR and AR. When we did some initial implementations with our early customers, we could see that the value we could bring to manufacturers was immense compared with gaming. The beauty of manufacturing and the enterprise setup is a lot of variables—like the network, the content—are already there and the use cases are already identified. So we decided to adapt the technology we were building to work in manufacturing.
With our system, you can now bring any CAD model seamlessly onto the platform and visualize it on different devices—whether it’s a Microsoft HoloLens, an Oculus Quest, or any Android device—and play around with those high-end models as if they’re right in front of you.
Can you name some of your manufacturing-industry customers?
I can tell you we’re working with one of the largest France-based car manufacturers, as well as an aerospace supplier and a bunch of telcos in the U.S. and in Europe.
Why do you believe more manufacturers are moving toward automation technologies?
This medium is giving superpowers to the frontline workers. To date, the AR/VR technology we’ve seen has been mostly for the white-collar worker. But AR gives this capability to people who are working in the field.
For example, say a field technician in an airplane hangar is assigned to repair an engine harness of a large aircraft. He used to have to go through a whole process of reading through the physical manual and trying to figure out the different components and different steps involved. But now, with this system, the power has been given to him so that, as he begins repairing the harness, all of the instructions appear right in front of his eyes. And he can get assistance if it is required: He can call an expert who can observe what the technician is doing, live. So, today they can accomplish the same task with about 40 percent to 50 percent greater efficiency. That’s mind-boggling in aerospace and defense, where the money lost when an aircraft is grounded is huge.
What’s your favorite business case in your work?
In the automotive industry, they used to use clay modeling to conceptualize and design the next generation of cars. It is very time consuming and costly, and it used to take years. And there used to be a lot of rework involved. The moment AR or mixed reality came in, it was a huge leap for them in terms of shrinking those design cycles from years to months. Today, the base structure is still created in clay or foam first. But you can now do all the updates and iterations virtually, aligning those virtual objects on top of that physical structure. You can try as many options as you want—all in real time—without physically changing things. And once you freeze your design, you can talk about translating it into the physical structure. It saves millions of dollars.
What about workforce considerations? What specific examples might you have of companies using these technologies to complement jobs?
We have been working with an aerospace and defense manufacturer that constructs really large spacecraft. Final assembly of the whole spacecraft requires using precisely placed fasteners. And the craft is a really complex structure with a lot of wiring harnesses. Earlier, they had been doing the work physically, by taking physical measurements and putting markers here and there. The process was very time consuming. Now, by wearing just our AR glasses, the workers are able to see precisely where each fastener needs to be placed. The technicians doing the assembly get all necessary instructions right in front of their eyes, and it’s all hands free. It has sped up the whole process by 90 percent.
And the savings are huge. With each fastener, because of the productivity gain, you are talking about saving about $38 by using AR. And each spacecraft has about 60,000 fasteners.
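The arithmetic behind that savings claim is easy to check; here is a quick sketch using the two figures quoted above (roughly $38 saved per fastener and about 60,000 fasteners per spacecraft):

```python
# Back-of-the-envelope estimate of per-spacecraft savings from AR-assisted
# fastener placement, using the figures quoted in the interview.
savings_per_fastener = 38        # dollars saved per fastener (quoted)
fasteners_per_craft = 60_000     # fasteners per spacecraft (quoted)

total_savings = savings_per_fastener * fasteners_per_craft
print(f"Estimated savings per spacecraft: ${total_savings:,}")
# Estimated savings per spacecraft: $2,280,000
```

At roughly $2.28 million per spacecraft, "the savings are huge" is not an exaggeration.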
What can you say about the importance of your visualization tech being cloud based?
Our manufacturing customers have been looking to leverage AR/VR across the product lifecycle—design and engineering, operations and maintenance, training and marketing—at scale. To achieve that goal, it’s important to be cloud-based because it’s the only way to get a high-fidelity, data-rich VR/AR experience on the small mobile form factor. Current smartphones and tablets have an enormous amount of processing power and onboard memory compared with a decade ago, but that’s still not enough to handle the requirements of truly immersive VR/AR.
In the last 24 months, most of the manufacturing companies we work with have gone through different proofs of concept and pilots for using AR. Once they see the value, they begin to scale. The problem with scaling with a standalone device and today’s infrastructure is that it becomes a painful process of taking large 3D CAD models and trying to decimate and optimize them so they can fit into a different, less powerful device. In the process, you are dumbing down the model and defeating the purpose for which the 3D CAD models were created. You’re losing a lot of design intent and hierarchy. But by bringing in the cloud, the device and computing constraints are completely gone. Now you can bring a rich CAD model seamlessly onto that device and visualize it in full fidelity. The cloud also makes it much easier to integrate with different systems.
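To see why on-device decimation "dumbs down" a model, it helps to compare geometry counts. The numbers below are hypothetical, chosen only to illustrate the scale of the gap between an engineering CAD model and a standalone headset's rendering budget:

```python
# Illustrative only (hypothetical figures, not from GridRaster): how much a
# rich CAD model must be decimated to fit a standalone device's budget.
model_triangles = 50_000_000   # a detailed engineering CAD model
device_budget = 1_000_000      # a rough triangle budget for a mobile headset

decimation_factor = model_triangles / device_budget
kept_fraction = device_budget / model_triangles
print(f"Geometry must be reduced {decimation_factor:.0f}x, "
      f"keeping only {kept_fraction:.0%} of the original detail")
```

Cloud rendering sidesteps this entirely: the full model stays on the server, and only the rendered frames travel to the device.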
Also, a key ability of AR is being able to align virtual objects very precisely on top of physical assets. Let’s go back to our spacecraft assembly example. The success of that application is a function of how finely you can map your physical world into the virtual world—in this case, on precisely mapping fastener locations onto the physical structure of the components. Since we use the cloud, we can create a very fine mesh, which allows us to anchor these virtual objects very precisely on top of the physical assets. That capability also lets you do a lot of other automation, like automated defect detection.
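The core operation behind this kind of anchoring is a rigid transform: once spatial mapping estimates the pose of a physical asset, points defined in the CAD model's frame can be placed in the world. The sketch below is a generic illustration (not GridRaster's API); the function name, pose values, and fastener coordinate are all hypothetical:

```python
import numpy as np

def anchor_to_world(points_model, rotation, translation):
    """Map model-space points into world space: p_world = R @ p_model + t."""
    return points_model @ rotation.T + translation

# Hypothetical example: a fastener location in the CAD model's frame, and an
# anchor pose (a 90-degree yaw plus an offset) estimated from the spatial map.
fastener_model = np.array([[1.0, 0.0, 0.0]])
yaw_90 = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
offset = np.array([2.0, 3.0, 0.5])

fastener_world = anchor_to_world(fastener_model, yaw_90, offset)
# fastener_world is approximately [2.0, 4.0, 0.5]
```

The finer the reconstructed mesh, the more accurate the estimated rotation and translation—which is why cloud-scale mapping translates directly into more precise overlays.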