Trimble explores acceleration of autonomous robot training
Deploying an autonomous robot to a new environment can be a tough proposition. How can you gain confidence that the robot’s perception capabilities are robust enough, so it performs safely and as planned?
Trimble faced this challenge when it started building plans to deploy Boston Dynamics’ robot, Spot, in a variety of indoor settings and construction environments. Trimble needed to tune the machine learning (ML) models to the exact indoor environments so that Spot could autonomously operate in these different indoor settings.
Aviad Almagor, division vice president of Emerging Technologies at Trimble, said: “As we deploy Spot equipped with our data collection sensors and field control software to indoor environments, we need to develop a cost-effective and reliable workflow to train our ML-based perception models.”
“At the heart of this strategy is an ability to analyse synthetic environments. Using NVIDIA Isaac Sim on Omniverse we could seamlessly import different environments from CAD tools like Trimble SketchUp. Generating perfectly labelled ground truth synthetic data then becomes a straightforward exercise.”
To ensure that models work robustly, developers working on robotics and automation applications need diverse datasets that include all assets of the target environment. In the case of indoor environments, the list might include assets such as partitions, staircases, doors, windows, and furniture.
While these datasets can be constructed manually using photographers and human labellers, that approach requires extensive preplanning, incurs high costs, and often delays when a project can start. With synthetic data, however, teams can bootstrap their ML training and get started immediately.
When building this dataset, the user can choose to include segmentation data, depth data, or bounding boxes. This perfectly labelled ground truth data opens many doors for exploration. Labels such as 3D bounding boxes, which are notoriously difficult to annotate by hand, can be obtained easily in simulation.
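To illustrate why simulation makes labelling trivial: because the renderer knows exactly which pixels belong to each object, a bounding box can be derived directly from a segmentation mask rather than drawn by hand. The following is a minimal sketch of that idea; the mask values and the class ID are illustrative assumptions, not Isaac Sim output.

```python
def bbox_from_mask(mask, class_id):
    """Return (x_min, y_min, x_max, y_max) covering all pixels labelled
    class_id in a 2D segmentation mask, or None if the class is absent."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value == class_id:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# A tiny 5x5 mask where class 2 (say, a door) occupies a 2x3 patch.
mask = [
    [0, 0, 0, 0, 0],
    [0, 2, 2, 0, 0],
    [0, 2, 2, 0, 0],
    [0, 2, 2, 0, 0],
    [0, 0, 0, 0, 0],
]
print(bbox_from_mask(mask, 2))  # -> (1, 1, 2, 3)
```

The same per-pixel ground truth can be reduced to any label type the model needs, which is why one synthetic render can feed segmentation, detection, and depth training at once.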
In this post, we outline the steps taken to build a training workflow using synthetic data generated from simulation. Although this workflow includes sophisticated simulation and ML technology, the steps required to complete this project are simple:
- Import the environment from CAD to the NVIDIA Omniverse platform.
- Build the synthetic dataset using NVIDIA Isaac Sim on Omniverse.
- Train the ML models using the NVIDIA TAO Toolkit.
Importing the environment from Trimble SketchUp to NVIDIA Omniverse
In this project, the environment was available in Trimble SketchUp, a 3D modelling application for designing buildings. NVIDIA Omniverse uses the Universal Scene Description (USD) format for scene description, so the SketchUp model is converted to USD and imported using one of the Omniverse Connectors. (For more information, see What is NVIDIA Omniverse Connect?.)
To ensure that all the assets are properly imported, the user must inspect the environment using NVIDIA Isaac Sim or the Create or View apps in Omniverse. In some cases, this process may require a few iterations until the environment is satisfactorily represented in Omniverse.
Building the synthetic dataset using NVIDIA Isaac Sim
Synthetic data is an important tool in training ML models for computer vision applications, but collecting and labelling real data can be time-consuming and cost-prohibitive. Moreover, collecting real training data for corner cases can be tricky, or even impossible. For example, imagine trying to train an autonomous vehicle to recognise and react properly to ensure the safety of pedestrians crossing a busy street. It would be dangerous to set up a photoshoot in a crosswalk with live traffic.
As Trimble plans to deploy autonomous robots in different environments for different use cases, they faced a training data dilemma: the question of how to safely get the right training datasets for these models in a reasonable timeframe and at a reasonable cost.
The built-in synthetic data generation capabilities of NVIDIA Isaac Sim directly address this challenge. (For more information, see What is Omniverse Isaac Sim?.)
A key requirement for generating synthetic datasets is support for the right set of sensors for the ML models being deployed. As noted earlier, NVIDIA Isaac Sim supports the rendering of images with bounding boxes, depth, and segmentation, which are all important for helping a robot to perceive its surroundings. Additional sensors, such as LiDAR and ultrasonic sensors, are also supported in NVIDIA Isaac Sim and can be useful in some robotic applications.
A further benefit of generating synthetic data is domain randomisation. Domain randomisation varies the parameters that define a simulated scene, such as the lighting, colour, and texture of materials in the scene. One of the main objectives is to enhance ML model training by exposing the neural network to a wide variety of domain parameters in simulation. This helps the model to generalise well when it encounters real-world scenarios. In effect, this technique helps to teach models what to ignore.
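Conceptually, domain randomisation amounts to re-sampling scene parameters before each synthetic frame is rendered. The sketch below shows that loop in plain Python; the parameter names, value ranges, and the `render_frame` placeholder are illustrative assumptions, not Isaac Sim API calls.

```python
import random

def randomise_scene(rng):
    """Sample one set of scene parameters for the next rendered frame."""
    return {
        "light_intensity": rng.uniform(200.0, 2000.0),     # arbitrary units
        "light_temperature": rng.uniform(2700.0, 6500.0),  # Kelvin
        "wall_colour": tuple(rng.random() for _ in range(3)),  # RGB in 0..1
        "floor_texture": rng.choice(["wood", "carpet", "concrete"]),
    }

rng = random.Random(42)  # seeded, so the dataset is reproducible
for frame in range(3):
    params = randomise_scene(rng)
    # render_frame(params)  # placeholder for the actual renderer call
    print(frame, params["floor_texture"])
```

Because the labels come from the simulator, every randomised frame stays perfectly annotated no matter how much the appearance varies, which is what teaches the model to ignore nuisance factors.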
Randomisable parameters in NVIDIA Isaac Sim include, among others, the lighting in the scene and the colour and texture of its materials.
Training the ML models using the NVIDIA TAO Toolkit
After the datasets are generated, formatting them properly for the NVIDIA TAO Toolkit greatly reduces the time and expense of training the models while ensuring that they are accurate and performant. The toolkit supports segmentation, classification, and object detection models.
The datasets that are synthetically generated in NVIDIA Isaac Sim are output in the KITTI format to be used seamlessly with the TAO toolkit. For more information about outputting data in NVIDIA Isaac Sim for training, see Offline Training with TLT.
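For context, a KITTI detection label is one space-separated line of 15 fields per object; for 2D object detection, only the class name and the pixel bounding box carry information, and the unused 3D fields are conventionally zeroed. The sketch below builds such a line; the class name and coordinates are illustrative.

```python
def kitti_label(cls, x_min, y_min, x_max, y_max):
    """Format one object as a KITTI detection label line (15 fields)."""
    fields = [
        cls,           # object class, e.g. "door"
        "0.00",        # truncation
        "0",           # occlusion state
        "0.00",        # observation angle (alpha)
        f"{x_min:.2f}", f"{y_min:.2f}", f"{x_max:.2f}", f"{y_max:.2f}",
        "0.00", "0.00", "0.00",  # 3D dimensions (h, w, l), unused in 2D
        "0.00", "0.00", "0.00",  # 3D location (x, y, z), unused in 2D
        "0.00",                  # rotation_y, unused in 2D
    ]
    return " ".join(fields)

print(kitti_label("door", 120, 45, 310, 400))
# -> door 0.00 0 0.00 120.00 45.00 310.00 400.00 0.00 0.00 ...
```

Each rendered image gets one such `.txt` file listing its objects, which is the layout the TAO Toolkit's detection networks expect to read.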
When working with synthetic datasets rather than real data, the user may need to iterate on the dataset to achieve better results. Figure 8 shows this iterative process of training with synthetic datasets.
The overall benefits of Trimble’s autonomous robot training
Trimble faced the all-too-common challenge of obtaining training data for the ML models of an autonomous robot in a cost-effective workflow. The solution was to use the Omniverse Connectors to efficiently import CAD data into the Universal Scene Description format, after which the data could be brought into NVIDIA Isaac Sim.
In the simulator, the powerful synthetic data capabilities of Isaac Sim make generating the required datasets straightforward. Synthetic data thus enables a more efficient training workflow and safer autonomous robot operation.