
For those of you who don’t know,

A digital twin is a dynamic, up-to-date digital replica of a physical object, built asset, or environment.

You might be thinking to yourself: “Why would a digital replica of a real-life object have any significance in the world?”

Let’s demonstrate how digital twins can help improve the built world with an example.

Imagine that you’re a construction manager for a firm that builds massive Chinese skyscrapers.

Imagine you’ve built the Shanghai Tower, a 128-story megatall skyscraper in Lujiazui that’s over 2,000 feet tall.

Shanghai Tower

The damage that could happen if that building were to collapse would be tremendous.

And unless you do a daily check-up on every single pipe in the building, it’s possible that, as materials degrade over time, such a disaster could happen.

Now I want you to imagine that the building had sensors on its most important parts.

Pipes, infrastructure, corridors, staircases, heating systems, elevators, and so on.

If you constantly get real-time data on the performance and health of these systems, it becomes almost child’s play to deduce and find errors that could lead to disasters in the future.

Compare that to the number of person-hours needed to check everything manually, and you’ve got a recipe for success.

This is just one example of what's possible with digital twins.

We’ve managed to help our clients achieve tremendous results with digital twins.

From improving warehouse productivity to helping an agriculture client identify the right types of plants and weeds, we’ve done it all.

Agriculture computer vision simulation

How are digital twins built?

Digital twins are made with the help of BIM (building information modeling), machine learning, artificial intelligence, and IoT (the Internet of Things).

Data from the original asset is used to build and improve the digital twin.
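
To make that flow concrete, here’s a minimal, hypothetical sketch: readings streamed from an IoT sensor update the twin’s state and flag anomalies early. The class name, threshold, and pipe ID are ours for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class PipeTwin:
    """Digital twin of a single pipe, kept in sync with its IoT sensor."""
    pipe_id: str
    pressure_kpa: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading_kpa: float):
        # Apply the latest sensor reading and keep the full history.
        self.history.append(reading_kpa)
        self.pressure_kpa = reading_kpa
        if reading_kpa < 150.0:  # hypothetical leak threshold
            print(f"ALERT: possible leak in {self.pipe_id}")

twin = PipeTwin("riser-42")
for reading in (310.0, 305.0, 120.0):  # stand-in for a live IoT stream
    twin.ingest(reading)
```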

By providing a precise, up-to-date model of its original, a digital twin can help designers, engineers, manufacturers, and decision-makers create more efficient structures.

Digital twins can help with everything from planning, design, and construction to operations and maintenance.

Consider a hospital that’s already been designed and constructed; imagine there’s a digital twin of the entire facility.

How would you use a digital twin to improve hospital infrastructure and productivity?

Well, just like before, you could use the digital twin to check for leaky pipes, faulty heating systems, and so on.

But what if you could also identify where work stoppages happen and which courses of action lead to accidents?

For example, you might determine that it takes a nurse too long to get medication from the cabinet for patients who desperately need it. If you saw this, you might move important medication closer to patients.

Digital Twin Example of A Robot Learning the Hospital Layout

This is just one of a thousand micro improvements you could do within your hospital. 

Just as a series of 1% performance improvements led British Cycling to Olympic gold medals, many 1% improvements could place your hospital at the top of the charts. [Source]

On a greater scale, multiple digital twins can be integrated into an entire ecosystem.

Digital Twins Interconnected

A Short History of Digital Twins

Digital twins first appeared in the 1960s. 

NASA was one of the first agencies to use mirroring technology to replicate systems in space.

Notably, NASA created a replica of Apollo 13, which became critical amid its challenging mission.

Apollo 13

Engineers were able to test solutions on the replica to avoid further disaster. 

Later on, in 2002, Dr. Michael Grieves, chief scientist for advanced manufacturing at the Florida Institute of Technology, introduced the concept of the digital twin at an American Society of Mechanical Engineers conference.

He proposed a product lifecycle management center containing the elements of a digital twin: the physical space, the virtual space, and the flow of information between the two.

The manufacturing industry quickly adopted digital twins, and the architecture, engineering, and construction industries followed suit.

Digital Twins Today

With the help of technological advancements like BIM (building information modeling), digital twin technology today plays a massive part in the digital transformation of several industries.

A digital twin starts with knowledge of the assets and spaces that make up a facility.

This type of descriptive twin is a live editable version of design and construction data, such as a visual replica of assets or facilities.

An informative twin has an added layer of operational and sensory data. 

As more and more data is added, the twin becomes richer and more strongly linked to its physical counterpart.

Predictive twins can leverage this operational data for insights, while comprehensive twins simulate future scenarios, and consider what-if questions.

In the future, twins will become autonomous, able to learn and act on behalf of users. Because digital twins can gather key information about things like population growth, natural resource supply levels, and historical data on environmental disasters, they can help build more resilient cities and infrastructure as the world changes.

Eventually, an entire ecosystem of digital twins will help industries respond to global challenges with powerful simultaneous changes.

Right now, digital twins are helping operations and facility managers respond faster by removing the need for complex and time-consuming maintenance documents.

Owners can gather information from the design and build phases to make faster business decisions, lowering operational and maintenance costs.

Professionals on-site can predict material and labor cycles, reducing waste and enhancing safety.

By helping professionals gain more insight into the inner workings of the world, digital twins are becoming partners in building a better future.

One of our readers asked us:

"What are some of the applications of digital twin technology?

For those who don't know, a digital twin is an electronic or virtual version of a real-world thing that is kept in sync in real-time. 

It uses data from IoT (Internet of Things) sensors embedded in real-world objects.

This data allows engineers to monitor systems, make adjustments to the digital twin, and see how the system would change in the real world before making any expensive changes to the real-life system.

Applications of this can be found in several industries.

In this blog post, we will show you some of the most popular Digital Twin Applications.

So let's get started:

1. Quality management

With the amount of data you can acquire from IoT sensors, a digital twin model can help you identify all the mishaps and potential errors that need to be fixed.

Digital Twin Application Quality Management

In addition, reworking systems in the digital twin and testing them is much faster and cheaper than real-world variants.

2. Healthcare 

A future application is a digital twin of a human. 

It would provide healthcare workers with real-time analysis of the body, helping them optimize patient performance, cost, and care.

Digital Twin Application In Healthcare

There is also a more immediate, operational side.

Digital twins can help improve the operational efficiency of hospitals by creating a digital twin of the building, care models, and staffing. 

If you can determine the points of error or slowdown, it's much easier to improve.

3. Supply Chain Management

Digital twins are widely adopted in the logistics/supply chain industry.

They can be used for applications such as:

Enhanced shipment protection.

If you know how certain conditions will affect your delivery, you can prepare for them and ensure that the end customer receives the product intact.

Optimizing warehouse design and operational performance.

Most warehouse owners don't know what the most effective warehouse layout is for their company, but they can find out with a digital twin of their warehouse.

Simply testing hundreds of different layouts and their effect on real-time production will help warehouse managers determine the optimal layout; a code sketch of this idea follows below.

Creating a logistics network.

If you think your current distribution routes are the best, you haven't tested them in a digital twin environment with ever-changing variables such as traffic conditions, construction, road layout, and so on.

Digital Twin Application in Logistics

Companies need a digital twin of their distribution to ensure their packages get to the customer as fast as possible. 
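
Here's the layout-testing idea from above as a small, purely illustrative sketch; simulate_throughput() is a hypothetical stand-in for a full digital-twin run over a day of orders.

```python
import random

def simulate_throughput(layout):
    # A real digital twin would replay real order data against the layout;
    # we fake a noisy score so the example runs on its own.
    return layout["aisle_width_m"] * 10 - layout["shelf_rows"] + random.gauss(0, 1)

# Enumerate candidate layouts and keep the one that scores best in simulation.
candidates = [{"aisle_width_m": w, "shelf_rows": r}
              for w in (2.0, 2.5, 3.0) for r in (8, 10, 12)]
best = max(candidates, key=simulate_throughput)
print("Best layout found:", best)
```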

There are many more applications of digital twins in the logistics industry, but those could take up an entire blog post of their own.

For now, we will move on to the next application.

4. Automotive industry

Developing new cars in the real world is crazy expensive; that's why most development processes are virtual.

Digital Twins can speed up this process by creating virtual models of the car and the environment you want to test the car in.

Digital Twin Application Automotive Industry

It's one thing to crash 1,000 cars in real life and waste millions of dollars in production; it's another to crash 1,000 cars in a digital twin, where the cost is only a couple of kWh of electricity.

And that's without mentioning the safety risks that testing these cars implies.

More intense and dangerous scenarios can be created in the digital twin than can be tested in the real world. 

Another aspect is creating autonomous vehicles. 

It's necessary to test autonomous cars in the real world, but if you can prepare their artificial intelligence through machine learning in a digital twin, the real-world test will be much smoother. 

5. Process Automation

Various sensors on manufacturing lines can help create a digital twin of your production process and analyze essential performance indicators.

Digital Twin Application Process Automation

Adjustments to these indicators can be applied in the digital twin to identify new ways to optimize production and lower costs.
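
As a toy illustration of that loop (sensor events in, KPI out, adjustment tried in the twin first), consider the sketch below; the stations, timings, and "second weld cell" experiment are invented.

```python
# (station, seconds per unit) events reported by line sensors
events = [
    ("press", 12.1), ("press", 11.8),
    ("weld", 20.4), ("weld", 19.9),
]

def avg_cycle_time(evts, station):
    times = [t for s, t in evts if s == station]
    return sum(times) / len(times)

print("weld avg:", avg_cycle_time(events, "weld"))

# Twin experiment: would adding a second weld cell halve the bottleneck?
adjusted = [(s, t / 2 if s == "weld" else t) for s, t in events]
print("weld avg with second cell:", avg_cycle_time(adjusted, "weld"))
```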

Conclusion

Digital Twins are the next logical step in IoT.

Almost all IoT applications can also have a digital twin application. 

Many have already been implemented.

Warning: This article contains software that can speed up Robotic Simulation tenfold.

After months of development by our awesome engineers at Robotic Simulation Services, we've launched our flagship software, ZeroSim.

ZeroSim is a robotics simulation engine built on the easy-to-use Unity 3D development platform and the power of the Robot Operating System (ROS).

Combining the power of ROS and Unity helps you make simulations the way they were supposed to be made, pain and hassle-free.

ZeroSim is designed for ease of use and rapid development of all sorts of robotics and simulation -- from warehouses and industrial settings to farming and outdoors -- from robotic arms to ground and drone-based mobile robots.

If you're ready to get started with the most potent robotic simulation software, click here:

The biggest reason our clients want us to create a robotic simulation for them is that getting access to an actual robot while developing is difficult at best and almost impossible at worst.

What's even more challenging is getting everyone in the same room to work on the robot.

Especially if your team is spread out around the world; and now, due to regulations and social distancing, in some parts of the world you wouldn't even be allowed to have your entire team in the same room.

You need a digital environment that everyone can work in, and that's what we provide for our clients with Digital Twins.

If you don't know what digital twins are, here's a quick recap: "Digital Twinning is creating an electronic or a virtual version of a real-world thing and keeping them in sync in real-time. "- If you want to learn more about digital twins, check out this article:

The best thing about having a digital twin of your environment is that you can have a small team implementing what thousands of people overseas have created. 

Let's say you're located in the US. Your best engineer might be in Japan, but you can't leverage their talents if they can't work on the local robot in the US; that is, unless you have a digital twin.

Development of a Control System

At RSS, we're all about rapid development and prototyping. A client recently hired us to create a control system for their robot, so we buckled up and went to work. 

Our first step was to put the robot in our flagship Robot Simulation & Development software, ZeroSIM. 

How To Create A Robotics Control System

If you haven't heard about ZeroSIM, the summary is that it uses the power of ROS combined with the modeling power of Unity. You can check it out here: ZeroSIM.

The second step was to code against the robot in the ZeroSIM simulation from home and our office. This sped us up a lot, as we didn't have to constantly visit the client's warehouse to work on the robot; we did most of the work in our office.

The third step was moving to the final robot. After doing most of the work on the digital counterpart, we went to the warehouse ourselves to test the robot and refine the details.

Some stuff like the emergency stop button is always better tested in the real world than in the digital twin.

The interesting thing about our last client is that they weren't up for simulation right away; they'd had bad experiences with it in the past.

Thankfully, they weren't opposed to testing it out, and since our ZeroSIM software is much better than what they were using previously, they've had a fantastic experience. 

One guy was even able to go to Poland to stay with his family and work from there.

Converting a REST API into ROS Commands

A client we recently worked with wanted to use their REST API to control their robots.

Since most robots are controlled with ROS (Robot Operating System), we had to develop a converter to let them control the robots through their REST API.

The client didn't have many ROS experts at the company, certainly not as many as they needed to handle the robots, so they were heavily dependent on the REST API.

And we don't blame them: ROS has a notoriously steep learning curve.

Another benefit of using the REST API is that the client had much more secure communication.

The big downside of ROS is that it doesn't have secure communication.

Since security was a priority for the client, we made sure that ROS only ran inside a Docker container and that there was no way for someone to exploit ROS to breach the client's security.

The REST API provided a centralized control unit, a place where they could easily control 50+ robots.
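
To give a flavor of what such a converter can look like, here's a minimal sketch of a REST-to-ROS bridge. It assumes ROS 1 with rospy and Flask; the endpoint, topic, and message type are our choices for illustration, not the client's actual API.

```python
import rospy
from geometry_msgs.msg import Twist
from flask import Flask, request, jsonify

app = Flask(__name__)
rospy.init_node("rest_bridge", disable_signals=True)  # Flask owns the main loop
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)

@app.route("/robots/<robot_id>/velocity", methods=["POST"])
def set_velocity(robot_id):
    # Translate a JSON REST request into a ROS Twist message.
    body = request.get_json()
    msg = Twist()
    msg.linear.x = body.get("linear_x", 0.0)
    msg.angular.z = body.get("angular_z", 0.0)
    cmd_pub.publish(msg)  # a multi-robot setup would namespace the topic per robot
    return jsonify({"robot": robot_id, "status": "sent"})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)
```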

Why You Need to Test Your Robots in a Digital Twin

A recent client had an issue where they ordered many robots for their warehouse and wanted to develop and program them to their needs. 

They didn't use a digital twin simulation before and looked to program everything on the robots themselves.

The problem with this approach is that they almost ended up with a farm of useless robots.

The robot's hardware wasn't up to their needs, and they needed to improve and change it. This is a massive oversight that cost them millions of dollars. 

Oversights like these can easily be prevented by creating a digital twin of your warehouse beforehand. If you're interested in learning more about it, feel free to look at our digital twin case study.

And if you need a team of professionals who can create an amazing digital twin for you, feel free to contact us here: 

The Role of Simulation in Robotics


The power of simulation has revolutionized the robotics industry, supporting design and analysis across different research and development areas.

Simulation is the process of designing a virtual model of an actual or theoretical physical system, describing its environment, and analyzing its output while varying the designated parameters.

The capability of creating a digital twin and reconstructing its surroundings without the need for a physical prototype allows companies to save time and money on their concept models.

Now, institutions no longer need to manufacture expensive, time-consuming iterative prototypes, but instead, they can generate digital data representing their desired system. Simulation allows us to study the structure, characteristics, and function of a robotic system at different levels of detail, each having different requirements.

Robotic simulation provides proof of concept and design, ensuring that flaws do not get built into automated systems.

They can be used to analyze kinematics and dynamics of robotic manipulators, construct different control algorithms, design mechanical structures, and organize production lines.

Advances in simulation allow physically accurate digital copies to be built and operated in real time, with ray and path tracing for true-to-life visualization without compromising accuracy.


Digital Twins - Building Your Virtual Robot


The simulation process begins by building a digital twin of a physical object or system you wish to study.

This digital twin is a virtual version of the physical object and will mimic its characteristics and behaviors as if in the real world.

Collecting the data required for creating a high-fidelity digital twin involves technology such as realistic physics packages and material definition language (MDL).

Realistic physics packages equip the model with stable, fast, and GPU-enabled physics, allowing precise rigid and soft body dynamics, in addition to the recreation of destructible environments.

MDL is used to describe a physically-based material’s property and how it is supposed to behave, permitting the assignment of appropriate material traits to their respective components.

These features give your robot robust and articulated dynamics.

Once your digital twin’s features align with its real-world counterpart, it’s vital that your virtual model also behaves equivalently.

Reinforcement learning (RL) is a segment of machine learning that teaches software agents what actions to take in their environment to reach a predefined objective.

The agents learn by interacting with their environment, observing the results, and receiving a positive or negative reward.

RL’s advantages are that it eliminates the need to specify rules manually, it can retrain the agent easily to adapt to environmental changes, and it can train continuously, improving performance all the time.
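
To make that interact-observe-reward loop concrete, here's a tiny, self-contained Q-learning sketch in which an agent steers a one-dimensional "joint" toward a target position. It illustrates the RL cycle described above, not the specific algorithm from the paper discussed next.

```python
import random

N_STATES, TARGET = 11, 5           # joint positions 0..10, goal at 5
ACTIONS = [-1, +1]                 # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == TARGET else -0.1  # positive at the goal, small penalty otherwise
    return nxt, reward, nxt == TARGET

for episode in range(500):
    state = random.randrange(N_STATES)
    done = state == TARGET
    while not done:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = random.randrange(2) if random.random() < 0.1 else Q[state].index(max(Q[state]))
        nxt, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][a] += 0.5 * (reward + 0.9 * max(Q[nxt]) - Q[state][a])
        state = nxt

print("Greedy action per state:", [ACTIONS[row.index(max(row))] for row in Q])
```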

This method is used in “A Reinforcement Learning Neural Network for Robotic Manipulator Control,” a letter by Yazhou Hu and Bailu Si published by MIT Press, to quickly control a rigid two-link robotic manipulator and steer it toward the desired positions.

The algorithm works like this: at about halfway to the desired coordinates, the control signals change sign to reduce the links’ speeds.

When the manipulator is near the desired positions, the control signals become very small, stabilizing the manipulator at the end goal.

The algorithm demonstrates that RL provides an adaptive control policy for a complex dynamical system to solve manipulation tasks without prior knowledge of the dynamics and the parameters of the system, even when nonlinearities are present.

Minimizing the Simulation-To-Real Gap


Now that you have created a digital twin, the next step is to reconstruct your digital twin’s environment.

Many times, the environment your robot will be in is an actual physical location that you would want to replicate, such as a hospital or a warehouse.

In this case, data is collected through photogrammetry, building information modeling (BIM), or sensors.

Some of these sensors are RGB-D cameras and LiDAR. RGB-D combines an RGB image with its corresponding depth data, allowing you to create a three-dimensional rendition of an object. LiDAR is a light detection and ranging sensor that gives the robot a three-dimensional picture of the world.

Applications have been routinely developed to process LiDAR data, such as algorithms for scan matching, object detection, and mapping.

Path planning using reinforcement learning also teaches the robot where to go, how to avoid collisions, how to identify objects, and how to determine where its goal destination is.

Domain randomization improves performance by varying the robot’s environment, such as texture, color, lighting conditions, and placement of objects, to train and test the robot’s perception.
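
A minimal sketch of what domain randomization can look like in code; the parameters and ranges below are invented for illustration.

```python
import random

def sample_domain():
    # Draw a fresh set of visual/placement parameters for each episode.
    return {
        "light_intensity": random.uniform(0.3, 1.5),
        "floor_texture": random.choice(["concrete", "wood", "tile"]),
        "object_hue_shift": random.uniform(-0.2, 0.2),
        "object_position_jitter_m": random.uniform(0.0, 0.15),
    }

for episode in range(1000):
    domain = sample_domain()
    # env.reset(**domain)  # a simulator would apply these, then training proceeds as usual
    if episode < 3:
        print("episode", episode, "domain:", domain)
```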

These technologies, coupled with photorealism, help the robot sense the virtual world as if it were real.


Robotic Simulations Across Industries


Robotic simulation has allowed the production of more customizable, compatible, accurate, and automated products.

The automobile industry can leverage these attributes through multi-physics packages that support ground vehicle modeling, simulation, and visualization.

One such library is Chrono::Vehicle developed by members of the University of Wisconsin-Madison and the University of Parma and funded by the U.S. Army.

This software package is designed in a modular manner, using a template-based approach to allow rapid prototyping of existing and new vehicle configurations.

It also has large-scale vehicle-terrain-environment capability for multi-physics and multi-scale simulation. Although vehicles can be complex, with intricate connectivity and precise design configurations, their systems have relatively standard topologies.

These predefined frameworks allow the developers to design modeling tools based on a modular approach.

The modules represent the vehicle’s subsystems, such as suspension, steering, and driveline.

The template defines the essential modeling elements such as bodies, joints, and force elements.

The template parameters are the hardpoints, joint directions, inertial properties, and contact material properties.
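
As a rough illustration of that idea (not Chrono::Vehicle's actual API), a template fixes the modeling elements while its parameters specialize it for a particular vehicle:

```python
from dataclasses import dataclass

@dataclass
class SuspensionTemplate:
    """Template: the modeling elements are fixed; the parameters vary per vehicle."""
    hardpoints: dict        # named attachment points, in meters
    joint_directions: dict  # axis vectors for each joint
    spindle_inertia: tuple  # kg*m^2 about each axis

# Illustrative parameter values for one vehicle configuration
front_double_wishbone = SuspensionTemplate(
    hardpoints={"upper_ball_joint": (0.1, 0.6, 0.3)},
    joint_directions={"lower_control_arm": (0.0, 1.0, 0.0)},
    spindle_inertia=(0.02, 0.04, 0.02),
)
```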

Several studies can be performed with this modular approach, such as standard mobility testing on rigid flat terrain, or a double lane change with a path-follower driver system and a constant-speed controller to find the maximum speed at which the vehicle can safely perform the maneuver.

The simulations also include a step-climbing validation test for determining the maximum obstacle height that a tracked vehicle can accomplish from rest.

Chrono::Vehicle can simulate fording maneuvers and sloshing of liquids in vehicle-mounted tanks.

Autonomous vehicles can also be simulated, such as a convoy of cars equipped with virtual LiDAR, GPS, and IMU sensors that allow the fleet to follow the car ahead.


In 2000, the US Food and Drug Administration (FDA) approved surgical robotic devices for human surgery.

Because robotic surgery requires a different set of surgical skills from conventional surgery, robotic surgery simulators allow surgeons to be properly trained to safely adopt this innovative technology.

Robotic simulators that can provide automated, objective performance assessments are useful for training surgeons and provide a safe environment for learning outside of the operating room.

Such a device was developed by 3D Systems (formerly Simbionix): the RobotiX Mentor, a stand-alone simulator of the da Vinci robot, a surgical system that allows surgeons to perform complex minimally invasive surgeries.

Image taken from: https://simbionix.com/simulators/robotix-mentor/

Its software replicates the functions of the robotic arms and the surgical space. It offers complete surgeries and 53 procedure-specific exercises in a fully simulated anatomical environment with tissue behavior.

This gives students a reproducible environment while providing complications and emergent situations similar to those that might occur during a real operation.

Overall, robotic simulation is viewed as a modality that allows physicians to perform increasingly complex minimally invasive procedures while enhancing patient safety.

All in all, robotic simulation makes it possible to design robots, rapidly test algorithms, perform regression testing, and train systems using realistic scenarios.

Data collection is relatively inexpensive, simulation can procedurally generate scenes, and state information is trivially available.

Reduced learning on the real robot is also highly desirable, as simulations are frequently faster than real time while being safer for both the robot and its environment.

Resources

[1] Y. Hu and B. Si, "A Reinforcement Learning Neural Network for Robotic Manipulator Control," Neural Computation, vol. 30, no. 7, pp. 1983-2004, 2018.
[2] R. Serban, M. Taylor, A. Tasora and D. Negrut, "Chrono::Vehicle: template-based ground vehicle modeling and simulation," International Journal of Vehicle Performance, 2019.
[3] D. Julian, A. Tanaka, P. Mattingly, et al., "A comparative analysis and guide to virtual reality robotic surgical simulators," The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 14, no. 1, 2017.

We were approached by a client in 2020 to help them create a digital twin of their robotic research facility in San Jose.

In order to gather the necessary data to create the digital twin, we captured a point cloud of the space.

The client researches many different kinds of robots, including AMR (Automated Mobile Robots) for moving goods around in a facility, as well as stationary robot arms for doing pick and place operations.

The facility includes some office space and cubicles, but is mostly made up of floor space for robot arms and AMRs, plus some shelving to allow AMRs to fetch and place products. We wanted to capture the entire facility.

The main purpose of the digital twin is for the client to research computer vision and machine learning methods for training robots, so the goal was to produce a high-fidelity version of the digital twin using the Unity HDRP (High Definition Render Pipeline).

We also wanted to produce a lower-fidelity version that could be used in AltspaceVR or in a standalone mobile app.


More Point Cloud Data Is Crucial

In order to create a digital twin, you need measurements.

You need spatial measurements so you can recreate the geometry in 3D, and you need visual references so you can create the texture maps needed to give the 3D model color.

The more measurements you take, the more accurate you can make your model.

So a team was sent to the facility with a Geoslam ZEB Go device, which was used to capture a point cloud scan of the space. The device also captured a simultaneous video that is synched to the point cloud capture.

GeoSLAM ZEB Go Device

Additional reference images and video were captured using cell phones. We also had blueprints of the facility, and models for the robot arms were provided by their respective manufacturers via URDF files or other CAD models.

Five point clouds were captured using the Geoslam device, totaling 1.5 GB of point cloud data. In addition to capturing point clouds, videos, and still images of the facility, our team also took measurements of some of the more critical items using a tape measure.

Point Cloud Scanning Of The Warehouse

The team spent about six hours prepping for and capturing the space. When combined and synched with the videos captured of the same areas, there was over 196 GB of data total. Four or five hours were spent combining the data and managing it after the initial scanning session.

Too Much Point Cloud Data Can Be Bad

While having a lot of data is great in terms of being able to produce an accurate model, it’s not so good when it comes to creating a simulation that can perform in real time. In order to provide a proper simulation environment, we needed to turn the point clouds and reference images into an accurate mesh with photorealistic materials so it would be usable for machine learning with computer vision.

We also needed to create accurate collision models for all of the scanned geometry. Meshes are a very poor choice when it comes to collision models, and in most simulations, the collision models are more important than the visual model. Game engine performance is determined more by the physics than by the rendering, and using meshes as collision models can dramatically affect performance. The thousands of polygons generated for a floor increase the burden on the physics engine by a huge factor.

We also probably want to do some segmentation on the data.

Point Cloud Scan

Segmentation means figuring out which parts of the data correspond to which ‘feature’.

It’s common practice to differentiate which points in the point cloud are the floor, and which are the ceiling, walls, windows, doors, etc.

Not only do we want to identify them, but in some cases we want to remove them.

It’s unlikely we’ll be able to scan most places without any people in them, and we’re going to need to remove people who happened to walk across the room while we were scanning.

There are also a lot of things which we might want to remove so we can replace them later with better models from elsewhere, and also take advantage of repetition.

If we have a bunch of identical shelves, it’s better to model a single shelf and use that model for each instance than to have each one use unique polygons in an automated model.
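
For a sense of what the floor-segmentation step can look like, here's a small sketch using the open-source Open3D library. The file path is a placeholder, and real scans typically need more preprocessing than this.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder path

# RANSAC finds the dominant plane (typically the floor) so it can be
# removed and later replaced with a clean two-triangle mesh.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
floor = pcd.select_by_index(inliers)
rest = pcd.select_by_index(inliers, invert=True)
print(f"Floor plane {plane_model}: {len(floor.points)} points; "
      f"{len(rest.points)} points remain")
```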


Too Much Point Cloud Data Is Ugly

Unfortunately, automated methods to produce meshes from point clouds do not produce accurate enough results for our client’s purposes.

Meshes produced from point clouds don’t have crisp edges and suffer from a ‘melted wax’ effect. Meshes generated from point clouds also don’t have accurate texture maps.

And unless the crew brings the equipment into every nook and cranny, point clouds can’t see behind or under things so the geometry there can only be guessed at.

They also have a lot more polygons than are really needed. Consider a floor for example. A floor could be modeled using two triangles if it is truly flat.

A floor generated by a point cloud scan however consists of thousands of polygons. This is very bad for the physics engine!

Photogrammetry is another method of creating meshes from images, but it requires many thousands of images and a lot of processing time, and the results still suffer from the same inaccuracies as a point cloud.


How To Turn Point Cloud Scans Into Digital Twins


The key to turning the point cloud scans and image references into a digital twin turns out to be old school game level design skills.

An experienced tech artist was given the point clouds, images and reference material and asked to use them to produce the 3D model.

They used a tool called CloudCompare to combine the five different point cloud scans into a single scan that they exported into a format compatible with the 3D modeling tool Blender.

In Blender, they used the point cloud scan as a reference guide to create a 3D model of the facility using traditional CAD tools.

Because the reference model was captured using a highly accurate LiDAR-based device, the resulting digital twin was also very accurate. And because the digital twin was created by hand, it is much more efficient for a physics engine to deal with than a mesh created using current automated point-cloud-to-mesh tools.

Our tech artist was also able to assign the appropriate materials to each surface as they built the model – something which most automated point cloud conversion processes do very poorly, if at all. Our artist could select from dozens of common materials such as metal, plastic, glossy paint, matte paint, stucco, carpet, etc. and get a physically accurate visual rendering of that kind of surface.

Another thing that our artist was able to do that would be very difficult for an automated process was to create LOD (Level of Detail) versions of everything that was scanned. These are versions of the model with fewer polygons that are used when viewing from a greater distance to improve performance. Without them, the scene would take five to ten times as long to render and the frame rate would go down dramatically.

Conclusion

Point clouds are a valuable tool for helping to create digital twins of building interiors but they can’t do the job alone. Reference photos and videos are also helpful, but even using photogrammetry will not suffice to create an accurate digital twin that can be used for robotic simulations.

Consider a desk for example. We were recently asked to model a desk for simulation purposes and there is no way that a digital scan could produce a model that could be used in a ROS environment. The desk had several drawers, and a scan of the desk would never show the interior of the drawers or reveal how they functioned.

In order for the desk to work in a simulation, it needs to be modeled in separate pieces for each of the moving parts and built into a hierarchy that allows the drawer to slide in and out, but not up and down or side to side.

The drawer needs limits set on how far in or out it can slide, how much friction is involved, how much it weighs, and how that weight is distributed. The collision model needs to be accurate for the drawer so a simulated robot can put something into it or take something out.
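
Here's a sketch of the articulation data such a hand-built drawer model has to encode, whatever format it ends up in (URDF, SDF, or Unity components); all values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class PrismaticJoint:
    axis: tuple           # unit vector; (1, 0, 0) = slides in/out only
    lower_limit_m: float  # fully closed
    upper_limit_m: float  # how far the drawer can be pulled out
    friction: float       # sliding resistance

@dataclass
class DrawerModel:
    mass_kg: float
    center_of_mass: tuple   # how the weight is distributed
    collision_box_m: tuple  # outer extents; the real model needs a hollow interior
    joint: PrismaticJoint

drawer = DrawerModel(
    mass_kg=3.0,
    center_of_mass=(0.2, 0.0, 0.05),
    collision_box_m=(0.45, 0.35, 0.12),
    joint=PrismaticJoint(axis=(1, 0, 0), lower_limit_m=0.0,
                         upper_limit_m=0.45, friction=2.0),
)
```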

That’s why for the foreseeable future, a skilled human will need to be in the loop to create useful digital twins.

Digital Twinning is creating an electronic or a virtual version of a real-world thing and keeping them in sync in real-time. 

That applies to more than just robotics.

An example of a digital twin could be seen in self-driving cars.

Self-driving cars can benefit from having a digital twin (virtual simulation) of the environment they're in, but you can also have a digital twin of the car itself. 

It's also very dangerous to put a self-driving car you're working on in the streets. Using digital twins, you're able to develop the AI for the self-driving car in a safe environment that's as real to the outside world as possible. 

When the car is on the real road, there are real consequences if something goes wrong.

Digital twins help solve this issue by creating a virtual simulation of the car's environment and the car itself. 

Digitalization of an object

When you put the two together, you can safely try new algorithms and use machine learning, which learns by making mistakes. Everything happens in a virtual environment, so if something goes wrong, you don't have to worry about wrecking your car or harming people.

Potential Problems With Using Digital Twins

If you don't bring in experts on digital twins, you could quickly run into problems once you deploy your AI in the real world.

If the gravity, collision, size, or any other physics or physical feature is off in your simulation, your robot will not function as intended.

Let's say you're using Unity to create an environment. You need to make the environment accurate, especially the sensors on your robot.

If the robot sensors are off, it's not going to behave exactly the same in the real world as it did in the virtual environment. 

Your environment needs to match what you see very accurately. 

If you've ever played video games, you've undoubtedly run into "invisible walls."

An invisible wall is a boundary in a video game that limits where a player character can go in a particular area but does not appear as a physical obstacle.

Invisible walls shouldn't happen in a digital twin. If there aren't any physical obstacles, the robot needs to be able to reach the location, especially if you're training your robot to use LIDAR.

 

What is digital twinning

LIDAR is a sensor the robot can use to take very accurate distance measurements, and it's a big part of how robots navigate their environment.
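
As a toy two-dimensional illustration of why this matters: a simulated LIDAR measures distance by casting rays against collision geometry, so if the collision geometry doesn't match what's visible, the scan no longer matches reality. Everything below is invented for illustration.

```python
import math

walls = [((0.0, 5.0), (10.0, 5.0))]  # one horizontal collision wall at y = 5

def ray_distance(origin, angle_rad, max_range=20.0):
    # Cast a ray and return the distance to the nearest wall it hits.
    ox, oy = origin
    dx, dy = math.cos(angle_rad), math.sin(angle_rad)
    best = max_range
    for (x1, y1), (x2, y2) in walls:  # horizontal segments only, for brevity
        if dy != 0 and y1 == y2:
            t = (y1 - oy) / dy
            x_hit = ox + t * dx
            if 0 < t < best and x1 <= x_hit <= x2:
                best = t
    return best

print("range at 90 degrees:", ray_distance((5.0, 0.0), math.pi / 2))  # 5.0 m
```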

Summary: If you're not an expert on digital twins, we advise that you hire experts like us to help make your digital twin a success. Otherwise, you might spend a lot of time and money developing virtual environments that don't work in the real world.

Do You Need A "Supercomputer" To Use Digital Twinning? 

Having a strong computer will undoubtedly help you simulate as many tests as you want and speed up your machine learning efforts.

However, you don't need to own one of these supercomputers, and neither does your staff. 

You can easily rent a powerful cloud machine from companies like Google and AWS to run your virtual environment.

Or if you already have a robust machine, but your employees don't, you can give your employees access to the machine via the cloud.

How Do You Use Digital Twinning In Robotics?

You can use digital twins for a wide range of purposes, including machine learning, diagnostics, and algorithm testing.

Using digital twins also allows companies to experiment in a low-risk environment. 

You don't have to spend money on procurement, materials, and production, and you know much sooner if you need to make changes before moving forward. 

Often, you'll want to add robots to your warehouse without knowing how. This is where digital twins come in.

With the almost unlimited scale in a low-risk environment, you can find the perfect solution for your warehouse. 

The goal is to create the perfect robots before investing millions of dollars assembling them in your warehouse. Also, since tests are being done in a digital environment, you don't need to stop your workers from doing their job or building a separate warehouse for testing. 

You have it all in the digital world.

What is Gazebo simulation?

Gazebo is an open-source 3D robotics simulator. It simulates real-world physics with high fidelity.

It helps developers rapidly test algorithms and design robots in digital environments.

They've branded themselves as "robotic simulation made easy," but there are many tools out there that are much easier to use and help speed up the process.

Gazebo is currently the leader in robotic simulation, but other big companies like Unity and Nvidia are starting to catch up and create their robotic simulation software.

Robotics simulation is an ever-growing space. Companies are investing more and more money to improve their workflow through robotic simulation.

Robotic simulation saves a lot of time and money because it allows people to test how robots work without huge investments.

We have created our own robotic simulator using Unity's powerful game engine.

The great thing about Gazebo is that you can create a dynamic simulation using multiple high-performance physics engines like ODE, Simbody, Bullet, and DART.

It helps you replicate gravity, friction, torque, and any other real-life conditions that could affect your simulation's success.

This is essential to your robotics development work. It would be terrible if you built a perfect robot in simulation, only to find it doesn't work under real gravity.

Benefits Of Gazebo Simulation

Gazebo helps you integrate a multitude of sensors, and it gives you the tools to test these sensors and develop your robots to best use them.

Suppose you don't have access to robotic hardware, or you want to test hundreds of robots simultaneously; that's impossible without a robotic simulator like Gazebo.

Even if you have access to hardware, Gazebo is still a useful tool because it allows you to test your robotic design before implementing it in the real world. This is why companies are investing so much money into robotic simulators and digital twins. They want to increase their manufacturing processes' workflow and speed without spending too much money on hardware.
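
As a taste of that workflow, here's a minimal sketch (assuming ROS 1 with a running Gazebo) that spawns a robot model into the simulation through Gazebo's standard spawn service; the URDF path and model name are placeholders.

```python
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose

rospy.init_node("spawn_demo")
rospy.wait_for_service("/gazebo/spawn_urdf_model")
spawn = rospy.ServiceProxy("/gazebo/spawn_urdf_model", SpawnModel)

with open("my_robot.urdf") as f:  # placeholder URDF path
    urdf_xml = f.read()

pose = Pose()
pose.position.z = 0.1  # drop the robot slightly above the ground plane

resp = spawn(model_name="my_robot", model_xml=urdf_xml,
             robot_namespace="/", initial_pose=pose, reference_frame="world")
print(resp.status_message)
```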

Since Gazebo is open-source software, there are also many 3rd party plugins and solutions that help you solve specific problems you might come across or speed up your workflow.

Gazebo is continuously updated, with its latest release being Gazebo 11.

Drawbacks Of Gazebo Simulation

Gazebo doesn't have all the bells and whistles that a 3D Simulator like Nvidia Isaac or our ZeroSIM might have.

It's tough to import 3D models into Gazebo, and if you're not a 3D modeler, it might be difficult to find someone who can prepare the files for Gazebo.

If you're using more popular programs like Unity, it will be much easier to import these models and have more realistic testing environments.

You can even re-create your entire warehouse in Unity.

Installing Gazebo is also a challenging task. Its Windows installation has 18 steps in total, which will be difficult for anyone who isn't a developer familiar with code-based installation.

NVIDIA Isaac SDK is the first open-source Robotic AI Development Platform with Simulation, Navigation, and Manipulation.

It’s a robust platform that helps you build smarter robots.

NVIDIA Isaac SDK heavily relies on AI.

As they put it: “AI makes it possible for robots to perceive and interact with their environments in novel ways, enabling them to perform tasks that were unthinkable—until now.”

NVIDIA Isaac SDK comes with a collection of powerful GPU-powered algorithms, frameworks, and basic applications that support accelerated robotic development. It also works hand-in-hand with Isaac Sim, which allows for the development, testing, and training of robots in a virtual environment.

In short: NVIDIA Isaac SDK heavily uses GPUs to increase performance and help you run better simulations faster.

What Can NVIDIA Isaac SDK Help You With?

NVIDIA Isaac SDK can help you create, modify & simulate your entire factory, even before installing any equipment. 

There are a lot of premade pallets, cardboard boxes, shelves, totes, bins, and everything that you’d see in your standard warehouse.

It’s all out there, in the simulation. 

The great thing about the simulation is that the physics are amazingly accurate. 

You don’t want to spend months in a simulation trying to create the perfect Robot for your warehouse and then having it all crash and burn because the gravity in your simulation is different from real-life gravity.

You can simulate your parts with 3D models. Add in the weights and center of gravity, and the simulation will interact with them very much like the actual manufacturing process would.

Besides doing cool simulations, you can also use NVIDIA’s AI to create real-time AI simulations using the power of their RTX graphics cards.

Who Can Use The NVIDIA Isaac SDK

Anyone can download the software right away by simply heading to NVIDIA’s download page, but this isn’t the biggest obstacle.

Learning new programming languages is the biggest challenge, but fortunately for all the kids fresh out of college, the whole thing can be programmed in Python using the Isaac SDK.

Before Isaac, industrial applications were generally programmed by ladder logic and more archaic types of languages. 

The fact that this is a Python-based platform is excellent for companies too. If you’re looking to hire a team for the job, you’ll be able to tap into a much greater talent pool of excellent developers, since Python is a much more popular language.

The development of robotic applications is still a tough job for most companies, especially when most developers relied on Gazebo, whose developer community is much smaller than NVIDIA’s and now Unity’s.

The fact that more prominent companies are getting into robotics simulation shows just how powerful and useful these tools are.

For accelerated robotic development, NVIDIA provides a collection that helps with development, training, and testing, greatly reducing the complexity of robotic development. Developers can now try the Isaac collection, which is well documented and has proper community support.

The NVIDIA Isaac Ecosystem

NVIDIA Isaac Ecosystem

Nvidia Isaac SDK consists of several parts that work well together to create some pretty powerful simulations.

1. Isaac Engine

The Isaac Engine is a software framework for building modular robotic applications.

It consists of computational graphs & CUDA messaging, visualization tools, and a Python API & ROS bridge.

It’s used to build robotic applications based on many small components that pass messages between each other and can be customized any way you like.
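
This isn't Isaac's actual API, but here's a toy sketch of that modular, message-passing structure: components expose a tick() method and exchange messages over named channels, with the engine scheduling the ticks.

```python
import queue

class Channel:
    def __init__(self):
        self.q = queue.Queue()
    def publish(self, msg):
        self.q.put(msg)
    def receive(self):
        return None if self.q.empty() else self.q.get()

class Camera:
    def __init__(self, out):
        self.out = out
    def tick(self):
        self.out.publish({"frame": "raw pixels"})

class Detector:
    def __init__(self, inp):
        self.inp = inp
    def tick(self):
        msg = self.inp.receive()
        if msg:
            print("detected objects in", msg["frame"])

images = Channel()
components = [Camera(images), Detector(images)]
for _ in range(3):  # the real engine would schedule these ticks
    for c in components:
        c.tick()
```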

2. Isaac GEMs

Isaac GEMs are a collection of GPU-powered algorithms that help accelerate the development of robotic applications.

It consists of:

3. Isaac Sim

Isaac Sim is a virtual robotics laboratory and a high-fidelity 3d world simulator that accelerates research, design, and development by reducing cost and risk.

This helps you test robots in different scenarios.

Robots can be simulated with virtual sensors (RGB, stereo, depth, LIDAR, IMU).

It consists of:

4. Isaac Apps

These are basic applications that make use of the NVIDIA Isaac SDK engine to showcase the real power of the NVIDIA Isaac SDK and help you get started quickly.

Problems NVIDIA Isaac SDK Solves

One of the biggest problems with old simulation software is that you don’t know how your automation will respond to your environment. 

NVIDIA Isaac acknowledges that you need an excellent way to simulate what your parts will do whenever you’re automating.

This speeds up your development time because it clears up the unknown unknowns.

This means robot development and deployment are much more rapid. As we mentioned before, you’re now able to get a bigger talent pool of programmers involved in your projects.

The best part of the NVIDIA Isaac SDK is using cloud computing to do all of your development.

Anyone in the world can easily buy an instance like Amazon Elastic Compute Cloud and develop these programs remotely. This means even people who don’t have a $2,000 graphics card can take part.

Your average laptop should be able to do the job.

2020 Nvidia Isaac SDK Update

The 2020.1 version brought us a lot of new possibilities for Nvidia Isaac.

Here’s a summary from Nvidia’s official website:
