How Simulation Provides the Essential Features for Building Virtual Robotic Worlds

The Role of Simulation in Robotics


The power of simulation has revolutionized the robotics industry, supporting design, analysis, and validation across different research and development areas.

Simulation is the process of designing a virtual model of an actual or theoretical physical system, describing its environment, and analyzing its output while varying the designated parameters.
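As a minimal illustration of that definition, the short Python sketch below simulates a simple projectile model and varies one designated parameter, the launch angle, while recording the output of interest. It uses no robotics toolkit, and the numbers are arbitrary.

```python
import math

def simulate_range(speed_m_s, angle_deg, dt=0.001, g=9.81):
    """Integrate simple projectile motion and return the horizontal range."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = speed_m_s * math.cos(angle), speed_m_s * math.sin(angle)
    while True:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        if y <= 0.0:
            return x

# Vary the designated parameter (launch angle) and analyze the output (range).
for angle in (15, 30, 45, 60, 75):
    print(f"angle={angle:2d} deg  range={simulate_range(10.0, angle):5.2f} m")
```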

The capability of creating a digital twin and reconstructing its surroundings without the need for a physical prototype allows companies to save time and money on their concept models.

Now, institutions no longer need to manufacture expensive, time-consuming iterative prototypes, but instead, they can generate digital data representing their desired system. Simulation allows us to study the structure, characteristics, and function of a robotic system at different levels of detail, each having different requirements.

Robotic simulation provides proof of concept and design, ensuring that flaws do not get built into automated systems.

Simulations can be used to analyze the kinematics and dynamics of robotic manipulators, construct different control algorithms, design mechanical structures, and organize production lines.

Advances in simulation allow physically accurate digital copies to be built and rendered with real-time ray and path tracing for true-to-life visualization without compromising accuracy.


Digital Twins - Building Your Virtual Robot


The simulation process begins by building a digital twin of a physical object or system you wish
to study.

This digital twin is a virtual version of the physical object and mimics its characteristics and behaviors as it would in the real world.

Collecting the data required for creating a high-fidelity digital twin involves technologies such as realistic physics packages and the Material Definition Language (MDL).

Realistic physics packages equip the model with stable, fast, GPU-enabled physics, allowing precise rigid- and soft-body dynamics as well as the recreation of destructible environments.
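The article does not tie itself to a particular engine, but PyBullet is one freely available physics package that illustrates the idea. The sketch below is a minimal example rather than a recommended setup: it drops a rigid body onto a plane, assigns material-like traits such as friction and restitution, and steps the simulation.

```python
# Minimal PyBullet sketch (one example of such a physics package; the article
# does not name a specific engine): drop a rigid body onto a plane, assign
# material-like properties, and step the simulation.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                    # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 1])

# Material-style traits assigned per body (friction, bounciness).
p.changeDynamics(plane, -1, lateralFriction=0.9, restitution=0.1)
p.changeDynamics(robot, -1, lateralFriction=0.6, restitution=0.2)

for _ in range(240):                                   # one second at the default 240 Hz
    p.stepSimulation()

pos, orn = p.getBasePositionAndOrientation(robot)
print("settled base position:", pos)
p.disconnect()
```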

MDL is used to describe a physically based material’s properties and expected behavior, permitting appropriate material traits to be assigned to their respective components.

These features give your robot robust and articulated dynamics.

Once your digital twin’s features align with those of its real-world counterpart, it’s vital that the virtual model also behaves equivalently.

Reinforcement learning (RL) is a branch of machine learning that teaches software agents what actions to take in a specific environment to reach a predefined objective.

The agents learn by interacting with their environment, observing the results, and receiving a positive or negative reward.

RL’s advantages are that it eliminates the need to specify rules manually, it can retrain the agent easily to adapt to environmental changes, and it can train continuously, improving performance all the time.
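To make the act-observe-reward loop concrete, here is a minimal tabular Q-learning sketch on a toy grid world. It is a generic illustration of the loop described above, not any particular product’s training pipeline.

```python
import random

# Toy 4x4 grid world: start at (0, 0), goal at (3, 3).
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]          # right, left, down, up
Q = {((r, c), a): 0.0 for r in range(4) for c in range(4) for a in range(4)}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(state, a):
    """Apply an action and return (next_state, reward, done)."""
    dr, dc = ACTIONS[a]
    nr, nc = state[0] + dr, state[1] + dc
    if not (0 <= nr < 4 and 0 <= nc < 4):
        return state, -1.0, False                      # bumped a wall: negative reward
    if (nr, nc) == (3, 3):
        return (nr, nc), 10.0, True                    # reached the goal: positive reward
    return (nr, nc), -0.1, False                       # small step cost

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        # Act (epsilon-greedy), observe the result, receive the reward.
        a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in range(4))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # temporal-difference update
        s = s2

print("learned greedy action at start:", max(range(4), key=lambda a: Q[((0, 0), a)]))
```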

This method is used in “A Reinforcement Learning Neural Network for Robotic Manipulator Control,” a letter by Yazhou Hu and Bailu Si published in Neural Computation (MIT Press), to quickly control a rigid two-link robotic manipulator and steer it toward the desired positions.

The algorithm works like this: at about halfway to the desired coordinates, the control signals change sign to reduce the links’ speeds.

When the manipulator is near the desired positions, the control signals become very small, stabilizing the manipulator at the end goal.

The algorithm demonstrates that RL provides an adaptive control policy for a complex dynamical system to solve manipulation tasks without prior knowledge of the dynamics and the parameters of the system, even when nonlinearities
are present.
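The paper’s controller is a learned neural network; the toy sketch below only mimics the qualitative behavior described above for a single, unit-inertia joint, with hand-picked thresholds and gains, and should not be read as the authors’ method.

```python
def toy_policy(q, qd, q_start, q_goal, u_max=1.0):
    """Hand-written stand-in (NOT the paper's learned policy) mimicking the
    qualitative behavior described above for a single unit-inertia joint."""
    to_go = q_goal - q
    total = q_goal - q_start
    if abs(to_go) < 0.05:                          # near the goal: small stabilizing signal
        u = 4.0 * to_go - 4.0 * qd
    elif abs(to_go) > 0.5 * abs(total):            # first half: full effort toward the goal
        u = u_max if to_go > 0 else -u_max
    else:                                          # past halfway: the sign flips to brake the link
        u = -u_max if to_go > 0 else u_max
    return max(-u_max, min(u_max, u))

# Integrate a unit-inertia joint (qdd = u) under this policy.
q, qd, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    u = toy_policy(q, qd, q_start=0.0, q_goal=1.0)
    qd += u * dt
    q += qd * dt
print(f"final angle = {q:.3f} rad, final velocity = {qd:.3f} rad/s")
```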

Minimizing the Simulation-To-Real Gap


Now that you have created a digital twin, the next step is to reconstruct your digital twin’s
environment.

Often, the environment your robot will operate in is an actual physical location you want to replicate, such as a hospital or a warehouse.

In this case, data is collected
through photogrammetry, building information modeling (BIM), or sensors.

Some of these sensors are RGB-D cameras and LiDAR. An RGB-D camera combines an RGB image with its corresponding depth data, allowing you to create a three-dimensional rendition of an object. LiDAR is a light detection and ranging sensor that gives the robot a three-dimensional picture of the world.
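As a sketch of how RGB-D data becomes geometry, the snippet below back-projects a depth image into a 3D point cloud with the pinhole camera model; the intrinsics and the synthetic depth image are placeholder values.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]               # drop pixels with no depth reading

# Placeholder intrinsics and a synthetic 480x640 depth image, for illustration only.
depth = np.full((480, 640), 2.0)                  # a flat wall two meters away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                                # (307200, 3)
```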

Applications are routinely developed to process LiDAR data, such as algorithms for scan matching, object detection, and mapping.
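A compact example of one such algorithm is point-to-point scan matching. The sketch below implements a brute-force 2D ICP in NumPy on synthetic data; production scan matchers add k-d trees, outlier rejection, and robust losses, so treat this as a teaching aid only.

```python
import numpy as np

def icp_2d(source, target, iters=20):
    """Brute-force point-to-point ICP: align `source` (N x 2) to `target` (M x 2).
    Returns the accumulated 2x2 rotation and translation vector."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iters):
        # Nearest-neighbor correspondences (O(N*M); real scan matchers use k-d trees).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # Best rigid transform between the matched sets via SVD (Kabsch / Procrustes).
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:             # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Synthetic example: a scan and a slightly rotated/translated copy of it
# (the offset is kept small so the brute-force matcher converges cleanly).
rng = np.random.default_rng(0)
scan = rng.uniform(-5, 5, size=(200, 2))
theta = 0.03
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
moved = scan @ R_true.T + np.array([0.05, -0.05])
R_est, t_est = icp_2d(scan, moved)
print("estimated rotation (deg):", np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))
print("estimated translation:", t_est)
```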

Path planning using reinforcement learning also teaches the robot where to go, how to avoid collisions, how to identify objects, and how to determine its goal destination.

Domain randomization improves performance by exposing the robot to potential changes in its environment, such as texture, color, lighting conditions, and object placement, to train and test the robot’s perception.
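In code, domain randomization can be as simple as drawing a fresh scene configuration before every training episode. The fields and ranges below are placeholders rather than any particular simulator’s API.

```python
import random

# Placeholder scene description; a real simulator would expose these as
# lighting, material, and pose handles in its own API.
TEXTURES = ["brushed_metal", "painted_wood", "concrete", "cardboard"]

def randomize_scene():
    """Draw one randomized configuration of the robot's surroundings."""
    return {
        "light_intensity":  random.uniform(0.2, 1.5),         # dim to over-bright
        "light_color":      [random.uniform(0.8, 1.0) for _ in range(3)],
        "floor_texture":    random.choice(TEXTURES),
        "object_position":  [random.uniform(-0.5, 0.5),       # x, y jitter on the table
                             random.uniform(-0.5, 0.5),
                             0.0],
        "object_hue_shift": random.uniform(-0.1, 0.1),
    }

for episode in range(3):
    scene = randomize_scene()
    print(f"episode {episode}: {scene}")
    # train_perception_model(scene)  # hypothetical training step on the randomized scene
```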

These technologies, coupled with photorealism, help the robot sense the virtual world as if it were real.


Robotic Simulations Across Industries


Robotic simulation has allowed the production of more customizable, compatible, accurate, and
automated products.

The automobile industry can leverage these attributes through
multi-physics packages that support ground vehicle modeling, simulation, and visualization.

One such library is Chrono::Vehicle, developed by members of the University of Wisconsin-Madison and the University of Parma and funded by the U.S. Army.

This software package is designed in a modular manner, using a template-based approach to allow rapid prototyping of existing and new vehicle configurations.

It also supports large-scale vehicle-terrain-environment interaction for multi-physics and multi-scale simulation. Although vehicles can be complex, with intricate connectivity and precise design configurations, their systems have relatively standard topologies.

These predefined frameworks allow the developers to design modeling tools based
on a modular approach.

The modules represent the vehicle’s subsystems such as suspension,
steering, and driveline.

The template defines the essential modeling elements such as bodies,
joints, and force elements.

The template parameters are the hardpoints, joint directions, inertial
properties, and contact material properties.
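The schematic Python sketch below, which is not Chrono::Vehicle’s actual C++ API and uses made-up numbers, illustrates that split: the template fixes which bodies, joints, and force elements a subsystem has, while a parameter set supplies the hardpoints, inertias, and stiffnesses for one concrete vehicle.

```python
from dataclasses import dataclass

# Schematic only: this mirrors the template/parameter split described above,
# not Chrono::Vehicle's actual API. All numbers are illustrative placeholders.

@dataclass
class DoubleWishboneParams:
    """Parameters that specialize the template for one concrete vehicle."""
    hardpoints: dict                 # named attachment points, e.g. {"upper_ball_joint": (x, y, z)}
    spindle_inertia: tuple           # (Ixx, Iyy, Izz) in kg*m^2
    spring_stiffness: float          # N/m
    damper_coefficient: float        # N*s/m
    contact_friction: float = 0.8

class DoubleWishboneTemplate:
    """Template: fixes which bodies, joints, and force elements exist."""
    BODIES = ["spindle", "upper_control_arm", "lower_control_arm", "upright"]
    JOINTS = ["upper_ball_joint", "lower_ball_joint", "upper_revolute", "lower_revolute"]
    FORCE_ELEMENTS = ["spring", "damper"]

    def __init__(self, params: DoubleWishboneParams):
        self.params = params

    def describe(self):
        return (f"{len(self.BODIES)} bodies, {len(self.JOINTS)} joints, "
                f"spring k = {self.params.spring_stiffness} N/m")

# One vehicle configuration is just one parameter set plugged into the template.
front_suspension = DoubleWishboneTemplate(DoubleWishboneParams(
    hardpoints={"upper_ball_joint": (0.0, 0.6, 0.3), "lower_ball_joint": (0.0, 0.7, 0.1)},
    spindle_inertia=(0.04, 0.02, 0.04),
    spring_stiffness=167000.0,
    damper_coefficient=22000.0,
))
print(front_suspension.describe())
```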

Several studies can be performed with this modular approach, such as standard mobility testing on rigid flat terrain and a double lane change with a path-follower driver system and constant-speed controller, used to find the maximum speed at which the vehicle can safely perform the maneuver.

The simulations also include a step-climbing validation test for determining the maximum obstacle height that a tracked vehicle can climb from rest.

Chrono::Vehicle can simulate fording maneuvers and sloshing of liquids in
vehicle-mounted tanks.

Autonomous vehicles can also be simulated such as a convoy of cars equipped with virtual LiDAR, GPS, and IMU sensors that allow the fleet to follow the car ahead.
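A toy gap-keeping speed controller of the kind such a follower might use is sketched below; the sensed gap stands in for a virtual LiDAR or GPS measurement, and none of this reflects Chrono’s actual sensor interface.

```python
# Toy convoy follower: the car adjusts its speed to hold a desired gap to the
# car ahead. The "sensed" gap stands in for a virtual LiDAR/GPS measurement.
def gap_keeping_speed(gap_m, gap_desired_m, leader_speed_m_s, k_gap=0.5):
    """Proportional gap-keeping: speed up when the gap is too large, slow down
    when it is too small."""
    return max(0.0, leader_speed_m_s + k_gap * (gap_m - gap_desired_m))

dt = 0.1
leader_pos, follower_pos = 20.0, 0.0
leader_speed = 10.0                                  # leader cruises at 10 m/s

for _ in range(600):                                 # simulate 60 seconds
    gap = leader_pos - follower_pos
    follower_speed = gap_keeping_speed(gap, gap_desired_m=15.0, leader_speed_m_s=leader_speed)
    leader_pos += leader_speed * dt
    follower_pos += follower_speed * dt

print(f"final gap: {leader_pos - follower_pos:.1f} m")   # settles near the desired 15 m
```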


In 2000, the US Food and Drug Administration (FDA) approved surgical robotic devices for
human surgery.

Because robotic surgery requires a different set of surgical skills from conventional surgery, robotic surgery simulators allow surgeons to be properly trained to safely
adopt this innovative technology.

Robotic simulators that can provide automated, objective performance assessments are useful for training surgeons and provide a safe environment for learning outside the operating room.

One such device, developed by 3D Systems (formerly Simbionix), is the RobotiX Mentor, a stand-alone simulator of the da Vinci robot, a surgical system that allows surgeons to perform complex minimally invasive surgeries.

[Image: The RobotiX Mentor simulator. Source: https://simbionix.com/simulators/robotix-mentor/]

Its software replicates the functions of the robotic arms and the surgical space. It offers complete surgeries and 53 procedure-specific exercises in a fully simulated anatomical environment with
tissue behavior.

This gives students a reproducible environment while providing complications and emergent situations similar to those that might occur during a real operation.

Overall, robotic simulation is viewed as a modality that allows physicians to perform increasingly complex minimally invasive procedures while enhancing patient safety.

All in all, robotic simulation makes it possible to design robots, rapidly test algorithms, perform
regression testing, and train systems using realistic scenarios.

Data collection is relatively inexpensive, simulation can procedurally generate scenes, and state information is trivially available.

Reduced learning on the real robot is also highly desirable, as simulations frequently run faster than real time while being safer for both the robot and its environment.

Resources

[1] Y. Hu and B. Si, "A Reinforcement Learning Neural Network for Robotic Manipulator Control,"
in Neural Computation, vol. 30, no. 7, pp. 1983-2004, 2018.
[2] R. Serban, M. Taylor, A. Tasora and D. Negrut, “Chrono::Vehicle: template-based ground
vehicle modeling and simulation,” International Journal of Vehicle Performance, 2019.
[3] D. Julian, A. Tanaka, P. Mattingly, et al. “A comparative analysis and guide to virtual reality
robotic surgical simulators,” The International Journal of Medical Robotics and Computer
Assisted Surgery, vol. 14, no. 1, 2017.
