What Are Digital Twins, and Why Are They Important?

How using simulation and digital twins lowered development costs by an order of magnitude

A recent report by Market Study Report, LLC projects that the digital twin market will surpass $20 billion by 2025. What explains the growth? The government of Singapore, for example, is spending over $73 million to build a “data-rich digital twin and improve public services with reduced cost for its citizens.” Like running a city, launching a product comes with significant expenses and risks. It’s important to have a relatively fast and inexpensive way to figure out whether your product is going to work well in the real world.

At RSS, many of our clients come to us because their research and development process takes too long and is too costly, with the risk often outweighing the results. When we create a digital twin of a client’s product, it’s a game changer. We can accelerate product development and reduce risk by an order of magnitude. Here’s how it works.

The Problem with Traditional R&D in Hardware

When companies are preparing a product, it’s not uncommon for the research and development timeline to last a year or longer. Hardware is hard; that’s why so many software start-ups struggle when they get into the hardware business. They are used to a pace of iteration and development that isn’t possible with traditional hardware development cycles. 

Another problem companies face is that a long and expensive R&D timeline increases risk and reduces their ability to experiment and tweak. There’s always the possibility that after the company spends time and money on an iteration, that iteration won’t work. And even if the prototype is close to perfect, fixing what remains means starting the whole process over again.

How Digital Twins Solve These Problems

At RSS, we combat these problems using digital twins. According to the IBM UK Technical Consultancy Group (TCG), a digital twin is defined as “a dynamic virtual representation of a physical object or system. . . . It uses real-world data, simulation or machine learning models, combined with data analysis, to enable understanding, learning, and reasoning.”

Using a digital twin gets us much closer to a software-style development cycle. Companies can iterate on design, conduct experiments, test assumptions, interact with the product, and repeat that cycle until they get it right. And because it’s all happening digitally, the yearlong timeline is reduced to a matter of months or weeks.

Using digital twins also allows companies to experiment in a low-risk environment. You don’t have to spend money on procurement, materials, and production, and you know much sooner if you need to make changes before moving forward. Traditionally, the hardware development and design process is separate from the manufacturing process, which increases the risk of spending too much on an undesired result. But with digital twins, the development and design stage essentially becomes part of the manufacturing process, so your time and cost are reduced, and your desired results come much sooner.

Finally, digital twins are a boon to marketing. With the images and footage that digital twins provide, companies can release promotional materials earlier and start selling their products even before they have created a physical prototype.

Creating a Digital Twin of a Toy Drone

At RSS, we helped one of our clients, a major toy vendor, accelerate the product design of a toy quadcopter drone. We created a simulation of the drone that was physically accurate in terms of weight, weight distribution, center of gravity, thrust per rotor, and other characteristics.
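
To make that concrete, here’s a minimal sketch in Python of the kind of parameter set a twin like this is built around. Every value below is an illustrative placeholder, not the client’s actual spec:

```python
from dataclasses import dataclass

@dataclass
class QuadcopterSpec:
    """Physical parameters a drone digital twin is built around.

    All values are illustrative placeholders, not the client's figures.
    """
    mass_kg: float = 0.085                    # total takeoff mass
    cg_offset_m: tuple = (0.0, 0.0, -0.002)   # center of gravity vs. geometric center
    arm_length_m: float = 0.06                # rotor hub to body center
    max_thrust_per_rotor_n: float = 0.35      # newtons at full throttle
    battery_capacity_mah: float = 500.0

spec = QuadcopterSpec()

# Sanity check: the four rotors together must at least offset gravity.
hover_thrust_n = spec.mass_kg * 9.81
assert hover_thrust_n < 4 * spec.max_thrust_per_rotor_n
```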

The simulation helped the client prove that, when manufactured, their drone would be stable in flight. The simulation also provided a platform on which to develop special higher-level features of the product, like facial recognition and tracking the user during flight. 

Simulating the Drone System

Simulating the drone involved simulating an entire system. Not only did we create a digital motor, body, and rotors, but we also incorporated principles of physics, including torque, energy consumption, and mass, to see how all these elements would interact in the real world. 

For example, when we simulated the rotors, we wanted to see how long it would take them to get up to speed and exactly how fast they would need to turn to fly well and perform maneuvers the client was expecting. We validated and tuned the speed of the rotors until the drone simulation could perform the maneuvers with speed and accuracy. 
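
As a rough illustration of the spin-up question, a first-order motor model is the simplest place to start. The time constant below is an assumed value that would be tuned against bench data, not a measured one:

```python
import math

TAU_S = 0.15  # assumed motor time constant (seconds), tuned against real data

def rotor_rpm(t_s, target_rpm):
    """First-order spin-up model: the rotor approaches its commanded
    speed exponentially with time constant TAU_S."""
    return target_rpm * (1.0 - math.exp(-t_s / TAU_S))

# How long until the rotor reaches 95 percent of its commanded speed?
t95_s = -TAU_S * math.log(1.0 - 0.95)
print(f"95% of commanded speed after ~{t95_s:.2f} s")  # ~0.45 s
```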

We also needed to see how much torque we could apply to the rotors based on their mass. We knew, of course, that the battery would eventually drain, so we needed to calculate how much battery charge it would take to spin the rotor. The client gave us their best guesses about how much power the motor would draw at different rotor spin rates, and we applied those numbers to see what would happen. 
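
Here’s a sketch of how those power estimates plug into a battery model. The sampled figures below stand in for the client’s guesses, and the pack size is assumed:

```python
import numpy as np

# Stand-ins for the client's estimated power draw (watts per motor)
# at sampled rotor speeds (rpm). These numbers are hypothetical.
rpm_samples = np.array([0, 4_000, 8_000, 12_000, 16_000])
watt_samples = np.array([0.0, 0.8, 2.4, 5.1, 9.6])

def power_draw_w(rpm):
    """Interpolate one motor's power draw at a given rotor speed."""
    return np.interp(rpm, rpm_samples, watt_samples)

def flight_time_min(rpm, pack_wh=1.85, n_rotors=4):
    """Rough endurance: battery energy over total draw. pack_wh assumes
    a 500 mAh pack at 3.7 V, an illustrative choice."""
    return 60.0 * pack_wh / (n_rotors * power_draw_w(rpm))

print(f"~{flight_time_min(9_000):.1f} min hovering at 9,000 rpm")
```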

We were able to validate and tune the design, demonstrating how it affected—and was affected by—every aspect of the system. Unlike drones in a video game, this digital twin included a simulation of physics principles, rotors, sound, and wind, creating a true representation of how the drone would behave in real life. Additionally, the simulation made it possible to test proposed paint schemes, colors, and some special maneuvers the drone could perform. None of this would've been possible without the use of digital twins.

Simulating Controls

It was important that our client could test how the drone would interact with the user. With our simulation, the client could put on a headset and simply turn their head or walk around as though they were really using the drone. The headset we used leveraged the Oculus Avatar SDK to provide user presence and used spatial context to allow the Oculus Touch controllers to behave differently depending on user intent.

For example, let’s say the user is trying to get the drone to accelerate very quickly, which causes the drone to tilt too far forward. The Oculus Avatar SDK would sense the user’s intent to move the drone quickly but would adjust the rotors to keep the drone at a safe angle. Ultimately, the drone simulation had a small lag behind the user’s controls, but the lag was realistic for a physical drone. Our clients were able to experience these controls and see how they felt for a user, all before they had to manufacture a physical prototype.
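
That safe-angle behavior boils down to a clamp between the attitude the user asks for and what the airframe can tolerate. A minimal sketch, with a limit we chose for illustration (not an SDK constant):

```python
import math

MAX_SAFE_PITCH_RAD = math.radians(25)  # assumed limit, chosen for this sketch

def clamp_pitch(requested_pitch_rad):
    """Honor the user's intent to accelerate while keeping the commanded
    pitch inside a safe envelope."""
    return max(-MAX_SAFE_PITCH_RAD, min(MAX_SAFE_PITCH_RAD, requested_pitch_rad))

# A hard forward shove asks for 40 degrees of pitch...
commanded = clamp_pitch(math.radians(40))
print(f"{math.degrees(commanded):.0f} degrees")  # ...but the drone holds at 25
```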

Face Tracking

Our client needed their drone to be able to track the user’s face. For this, we tested ray casting in our 3D simulation. With ray casting, the drone senses if there is any object between the camera and a face. If the ray cast hits the face, the drone can “see” the face. If something is in the way, the face isn’t seen.
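
Here’s a self-contained toy version of that occlusion test, using ray-sphere intersection in place of a real engine’s ray-query call. The scene format and tags are ours, purely for illustration:

```python
import numpy as np

def first_hit(origin, target, spheres):
    """Return the tag of the nearest sphere hit by the segment from
    origin to target, or None. Each sphere is (center, radius, tag).
    A toy stand-in for a real engine's raycast query."""
    direction = target - origin
    seg_len = np.linalg.norm(direction)
    direction = direction / seg_len
    best_t, best_tag = None, None
    for center, radius, tag in spheres:
        oc = origin - center
        b = np.dot(oc, direction)
        disc = b * b - (np.dot(oc, oc) - radius ** 2)
        if disc < 0:
            continue                    # ray misses this sphere entirely
        t = -b - np.sqrt(disc)          # distance to nearest intersection
        if 0 <= t <= seg_len and (best_t is None or t < best_t):
            best_t, best_tag = t, tag
    return best_tag

# A pillar sits between the drone's camera and the face, so the ray
# hits the pillar first and the face is reported as occluded.
camera = np.array([0.0, 0.0, 0.0])
face = np.array([0.0, 0.0, 5.0])
scene = [(np.array([0.0, 0.0, 2.5]), 0.5, "pillar"),
         (face, 0.15, "face")]
print(first_hit(camera, face, scene) == "face")  # False: something is in the way
```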

We also tested the drone’s follow mode, in which the drone constantly looks for a face and notices when that face becomes obscured. In the real world, users would have to watch to make sure the drone wasn’t about to fly into something it shouldn’t. With our simulation, we were able to test how well the drone could analyze a video frame without ever having to process real video. This feature could have taken months to test, but we were able to test and tune it for our client within weeks.
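
As a rough sketch of the follow-mode loop, here’s one simulation tick that reuses the first_hit test from the sketch above. The timestep and the half-second grace period are assumed tuning values, exactly the kind the simulation lets you sweep quickly:

```python
DT_S = 1.0 / 60.0  # assumed simulation timestep

def follow_mode_tick(scene, camera_pos, face_pos, state):
    """One tick of follow mode: track how long the face has been
    occluded and decide whether to keep following or hold position
    and search. Reuses first_hit from the previous sketch."""
    if first_hit(camera_pos, face_pos, scene) == "face":
        state["occluded_s"] = 0.0
        return "follow"
    state["occluded_s"] += DT_S
    # Half a second of lost sight before giving up is an assumed value.
    return "follow" if state["occluded_s"] < 0.5 else "hold_and_search"
```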

How VR and UX Play a Role 

In our simulations, VR is everything. With our client’s toy drone, not only could we validate designs, but we could also tune the designs by adding weights to change the flight characteristics in VR. We could then change the physical drone design to have more or less weight in different areas to match. VR also allowed us to revalidate or fine-tune the design when we needed to incorporate late design changes in manufacturing, such as a change in battery size.

VR made it possible to test how well the drone would fly in different-sized rooms with varying numbers of obstacles. The drone featured obstacle-avoidance algorithms, which we could simulate and test. We used the results from virtual testing to guide decisions about how much to limit the drone’s speed, how far ahead it should look to avoid obstacles, and what action to take when an imminent collision was detected.
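
One of those decisions, the speed limit, can be sketched as a simple function of distance to the nearest obstacle within the look-ahead window. All three knobs below are illustrative values, not the ones the client shipped:

```python
def allowed_speed_ms(obstacle_dist_m, max_speed_ms=3.0,
                     stop_margin_m=0.3, lookahead_m=2.0):
    """Scale permitted speed by proximity to the nearest obstacle.
    Parameters are illustrative tuning knobs, not shipped values."""
    if obstacle_dist_m >= lookahead_m:
        return max_speed_ms               # nothing in range: full speed
    if obstacle_dist_m <= stop_margin_m:
        return 0.0                        # imminent collision: stop
    frac = (obstacle_dist_m - stop_margin_m) / (lookahead_m - stop_margin_m)
    return max_speed_ms * frac            # taper down linearly in between

for d in (2.5, 1.0, 0.2):
    print(f"obstacle {d} m ahead -> {allowed_speed_ms(d):.2f} m/s allowed")
```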

We simulated audio for each rotor independently to make the user feel immersed in the environment. When the user moves the drone forward, the rotors spin at slightly different speeds, creating four distinct frequencies. And those frequencies sound different when the drone is in different places relative to the user and other objects. In our VR simulation, the user can experience all of those subtle sound differences.
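
A toy model of that effect: one tone per rotor, pitched by rotor speed and attenuated by distance to the listener. The blade-pass assumption and the attenuation curve are simplifications, not the production audio pipeline:

```python
import numpy as np

SAMPLE_RATE = 44_100

def rotor_mix(rotor_rpms, rotor_positions, listener_pos, seconds=0.5):
    """Mix one sine tone per rotor. Pitch follows blade-pass frequency
    (two blades per prop assumed); loudness falls off with distance."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    mix = np.zeros_like(t)
    for rpm, pos in zip(rotor_rpms, rotor_positions):
        freq_hz = 2.0 * rpm / 60.0        # two blade passes per revolution
        dist_m = np.linalg.norm(np.asarray(pos) - np.asarray(listener_pos))
        gain = 1.0 / (1.0 + dist_m)       # simple distance attenuation
        mix += gain * np.sin(2.0 * np.pi * freq_hz * t)
    return mix / len(rotor_rpms)

# Forward flight: the rear rotors spin slightly faster, giving four
# close but distinct frequencies that beat against one another.
samples = rotor_mix([11_800, 11_850, 12_150, 12_200],
                    [(-0.06, 0.06, 0), (0.06, 0.06, 0),
                     (-0.06, -0.06, 0), (0.06, -0.06, 0)],
                    listener_pos=(0.0, -1.5, 0.0))
```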

With any drone, UX is key. One important part of the user experience with our client’s drone was its ability to respond to a user’s specific hand motions. Interestingly, hand tracking is in high demand, yet few companies offer it: in a Reuters Events survey, 13.4 percent of companies said they were interested in hand tracking, while just 6.5 percent of the companies offering it were actively reaching out to prospective customers. With our client’s drone, we simulated “peekaboo” behavior. The drone tracks the user’s hands and face. If the user moves the controllers near their face, this triggers the drone to display a peekaboo animation.
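
The trigger itself can be as simple as a proximity check between the tracked hands and face. A minimal sketch, with a radius we picked for illustration:

```python
import numpy as np

PEEKABOO_RADIUS_M = 0.25  # assumed trigger distance, the kind of value tuned in VR

def peekaboo_triggered(left_hand, right_hand, face):
    """Fire the peekaboo animation when both tracked hands come within
    a small radius of the tracked face, as when the user covers their
    eyes. Positions are 3D points from the tracking system."""
    def near(hand):
        return np.linalg.norm(np.asarray(hand) - np.asarray(face)) < PEEKABOO_RADIUS_M
    return near(left_hand) and near(right_hand)

face = (0.0, 1.6, 0.0)
print(peekaboo_triggered((0.05, 1.62, 0.08), (-0.06, 1.58, 0.07), face))  # True
```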

Through VR, we were able to test and tune UX features quickly and inexpensively for our client.

In business, we always have to evaluate the payoff of our decisions: what’s the ROI? This is especially true when you need to decide whether to move forward with a hardware project that has complex behaviors or integrations. When you start integrating things like computer vision, IMU/motion data, and complex behaviors, you are looking at a time-intensive, high-risk project that is extremely costly.

Now imagine compressing the time to a working prototype to a month or two and then iterating on it in a matter of days or weeks. Imagine being able to not only visualize your product but also interact with it, code against it, and iterate on it, all as easily as you would in any software development project. That’s the power of a good digital twin. We can take our clients’ CAD files and simulate the system, allowing the client to interact with the simulated system in a virtual environment.

Here’s a sneak peek into what we’re working on now: We are bringing the power of reinforcement learning to speed up the development of complex behaviors. We are teaching an autonomous robot to perform complex maneuvers that we can then transfer to the physical platform. We’re working on improving the pipeline from simulation to deployment so there’s even shorter development time on the actual hardware platform.

To see how RSS can help you accelerate product development and reduce risk, visit https://www.roboticsimulationservices.com/services/
