How to overcome vision challenges while building a multi-robot mapping system

In part 1 of this series, you explored why multi-robot autonomous mapping is becoming essential for large-scale facilities and what the core components of a modern multi-robot mapping system are.

But knowing what a system should do is only half the story. The harder questions emerge once you start building:

  • How do you build it?
  • What happens when multiple robots generate conflicting map data?
  • How do they navigate shared spaces without colliding?
  • And how do you scale from a few robots to a fleet without your architecture collapsing under complexity?

In part 2 of this blog series, you'll get insights into the engineering challenges we encountered while building a production-ready multi-robot mapping system.

You'll also learn about the solutions that made it work, from custom map merging and namespace management to collision handling and real-world deployment.

Custom multi-robot map merging

The challenge

Each robot in a fleet generates its own local occupancy grid map. Standard ROS 2 tools lack a scalable, production-ready method for merging multiple maps into a single global map without relying on robots starting from a shared initial space.

In real-world deployments, robots may enter a facility from different doors, loading bays, or floors, making shared starting points impractical.

Our solution

We implemented a custom multi-robot map merging pipeline designed specifically to support independent robot startup and large-scale environments. The node performs several key functions:

  • Subscribes to map topics from all robots in real time
  • Aligns maps using origin and resolution information
  • Dynamically creates a larger global grid that expands as needed
  • Intelligently merges free, occupied, and unknown cells from multiple sources
  • Publishes a unified /merge_map topic available to the entire fleet

Offline map merging: the same pipeline also supports merging multiple partially mapped environments after missions are completed. By loading YAML map files from a specified directory, operators can combine maps from separate runs into a single global representation.
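The core align-and-merge step can be illustrated with a minimal, dependency-free sketch. This is a simplification of the idea, not the production node: it assumes all maps share a resolution and an axis-aligned orientation, and it omits live ROS subscriptions and message types.

```python
# Simplified sketch of the merging logic (not the production node): combine
# occupancy grids that share a resolution but have different world origins
# into one dynamically sized global grid. Cell values follow the ROS
# occupancy convention: -1 unknown, 0 free, 100 occupied.

def merge_grids(grids, resolution):
    """grids: list of (origin_x, origin_y, rows), where rows is a 2D list."""
    # Bounding box of all grids in world coordinates
    min_x = min(ox for ox, _, _ in grids)
    min_y = min(oy for _, oy, _ in grids)
    max_x = max(ox + len(rows[0]) * resolution for ox, _, rows in grids)
    max_y = max(oy + len(rows) * resolution for _, oy, rows in grids)
    width = round((max_x - min_x) / resolution)
    height = round((max_y - min_y) / resolution)

    merged = [[-1] * width for _ in range(height)]  # start fully unknown
    for ox, oy, rows in grids:
        col0 = round((ox - min_x) / resolution)
        row0 = round((oy - min_y) / resolution)
        for r, row in enumerate(rows):
            for c, value in enumerate(row):
                if value == -1:
                    continue  # unknown cells add no information
                current = merged[row0 + r][col0 + c]
                # Any observation beats unknown; occupied (100) beats free (0)
                if current == -1 or value > current:
                    merged[row0 + r][col0 + c] = value
    return (min_x, min_y), merged
```

The "occupied wins over free, any observation wins over unknown" rule is what lets robots contribute partial views without erasing each other's obstacles, and the recomputed bounding box is what lets the global grid grow as robots explore.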

Business impact

  • Lightweight mapping node that supports any number of robots
  • No hardcoded map size limits – the grid expands dynamically
  • Enables both real-time global mapping and offline post-mission merging
  • Produces a single unified map for saving, visualisation, and downstream applications
  • Flexible deployment: robots do not need to start close to each other or share an initial common area

Namespace-based multi-robot scaling

The challenge

Manually managing unique link names, joints, topics, and nodes for multiple robots becomes complex, error-prone, and difficult to maintain as the fleet grows. Without a systematic approach, launching a 6-robot system could mean tracking dozens of custom configurations.

Our solution

We adopted a ROS 2 namespace-based architecture where each rover is launched under its own robot name and spawn position. All topics, TF frames, and nodes are automatically scoped under the robot's namespace. This means:

  • /robot1/scan and /robot2/scan coexist without conflict
  • TF frames like robot1/base_link and robot2/base_link remain independent
  • Each robot runs its own isolated instance of SLAM Toolbox and Nav2
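The scoping convention can be made concrete with a small hypothetical helper (an illustration, not our launch code): topic names receive a /<namespace>/ prefix, TF frame names a <namespace>/ prefix without a leading slash, and adding a robot to the fleet is just adding a namespace string.

```python
# Hypothetical helper illustrating the namespace convention. Launching an
# additional robot only requires a new namespace string; no per-robot
# hand-editing of topic or frame names.

def robot_interfaces(namespace):
    # ROS 2 topics are prefixed with /<namespace>/
    topics = {name: f"/{namespace}/{name}" for name in ("scan", "map", "cmd_vel")}
    # TF frame names are prefixed without a leading slash
    frames = {name: f"{namespace}/{name}" for name in ("base_link", "odom")}
    return {"topics": topics, "frames": frames}

# Scaling the fleet is a single comprehension over namespaces
fleet = {ns: robot_interfaces(ns) for ns in ("robot1", "robot2", "robot3")}
```

In the actual system this mapping is applied by the ROS 2 launch machinery rather than hand-built dictionaries, but the naming rule is the same one that keeps /robot1/scan and /robot2/scan from colliding.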

Business impact

  • Rapid scaling to many robots with minimal configuration changes
  • Clean and modular system architecture that’s easy to understand
  • Minimal manual configuration reduces human error
  • Easier maintenance and deployment across robot fleets

Collision handling and safe navigation with Nav2

The challenge

Without a robust navigation stack, collision handling and robot-to-robot interactions during mapping become difficult to manage, especially in dynamic environments where people, equipment, and other robots share the space.

Our solution

Fully integrating Nav2 into each robot’s stack ensures that every rover benefits from production-grade navigation capabilities:

  • Local and global cost maps that represent obstacles and safe zones
  • Dynamic obstacle detection using real-time sensor data
  • Automatic path re-planning when obstacles block the planned route
  • Built-in robot-to-robot collision avoidance through cost map updates
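To make the re-planning trigger concrete, here is a deliberately simplified grid-level sketch (not Nav2 code): the current path is re-checked against the latest cost map, and a newly occupied cell on the route invalidates the path and forces a detour.

```python
# Simplified grid-level illustration (not Nav2 itself) of how a cost map
# update can invalidate the current path and trigger re-planning.
FREE, OCCUPIED = 0, 100

def path_is_blocked(costmap, path):
    """path is a list of (row, col) cells through the cost map."""
    return any(costmap[r][c] == OCCUPIED for r, c in path)

costmap = [[FREE] * 5 for _ in range(5)]
path = [(0, 0), (1, 1), (2, 2), (3, 3)]

costmap[2][2] = OCCUPIED  # e.g. another robot reported via a cost map update
blocked = path_is_blocked(costmap, path)
if blocked:
    # In the real stack, Nav2's planner computes the detour
    path = [(0, 0), (1, 0), (2, 1), (3, 2), (3, 3)]
```

Nav2 performs this check continuously against inflated cost values rather than a binary grid, which is what lets robots treat each other as obstacles and route around one another without any explicit robot-to-robot protocol.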

Business impact

  • Safer operation in shared spaces with people and equipment
  • Reliable navigation during mapping missions
  • Production-grade behaviour that transitions seamlessly from simulation to real robots

Shared space and dynamic obstacle handling

The challenge

In shared areas, one robot may detect a temporary obstacle – for instance, a person walking through a corridor – while another robot in the same area may not. This can create inconsistencies in the global map if not handled properly. Should the obstacle be permanently marked? How do you maintain map consistency while respecting local safety?

Our solution

We combine three layers of intelligence:

  • Central merged map logic that updates conservatively, avoiding permanent marking of temporary obstacles
  • Per-robot Nav2 cost maps that handle local obstacle avoidance in real time
  • Real-time sensor updates that feed into each robot’s navigation stack

This ensures that temporary obstacles in common areas are handled locally by the robots that detect them, while the global map maintains a clean, permanent representation of the facility.
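One way to picture the conservative update (an assumed, simplified version of the logic, not the exact production rule) is an evidence counter per cell: a cell is committed as occupied in the global map only after several consecutive occupied observations, so a passing person never reaches the permanent map.

```python
# Sketch of a conservative merged-map update (assumed, simplified logic).
CONFIRM_AFTER = 3  # consecutive occupied observations needed to commit

def update_cell(state, observation):
    """state is (value, streak); returns the updated state."""
    value, streak = state
    if observation == 100:          # occupied observation
        streak += 1
        if streak >= CONFIRM_AFTER:
            value = 100             # persistent obstacle: commit to global map
    else:
        streak = 0                  # a free/unknown observation resets evidence
        if observation == 0:
            value = 0
    return value, streak

state = (0, 0)
for obs in (100, 100, 0):           # a person crosses: two occupied frames
    state = update_cell(state, obs)
transient_value = state[0]          # stays free: never committed

for obs in (100, 100, 100):         # a wall observed repeatedly
    state = update_cell(state, obs)
persistent_value = state[0]         # committed as occupied
```

Local safety is unaffected by this filtering: each robot's Nav2 cost map still reacts to the person immediately, while only confirmed, persistent structure survives into the merged map.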

Business impact

  • Reliable operation in shared corridors and high-traffic areas
  • Safer navigation around people and moving objects during mapping
  • Fewer mapping errors caused by transient obstacles

The summary below recaps the key engineering challenges encountered in multi-robot mapping and the solutions implemented to enable scalable fleet deployment:

  • Merging maps without shared starting points → custom merging pipeline with a dynamically expanding global grid
  • Configuration complexity as the fleet grows → namespace-based scoping of topics, TF frames, and nodes
  • Collisions during mapping in shared spaces → full Nav2 integration with cost maps and automatic re-planning
  • Transient obstacles corrupting the global map → conservative merged-map updates with per-robot local avoidance

Multi-Robot Navigation with Pre-Built Map

Once mapping is complete, the unified global map becomes a valuable asset for downstream applications. Our system supports multi-robot navigation using the pre-built merged map, enabling a seamless transition from exploration to deployment.

After completing multi-robot mapping, the unified map can be saved and directly loaded into Nav2 for coordinated navigation. This empowers multiple robots to:

  • Localize within the same environment using the shared map
  • Plan paths independently or cooperatively
  • Navigate autonomously without re-mapping the facility
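The saved map follows the standard map_server format: a PGM image plus a YAML metadata file describing resolution and origin. As a dependency-free sketch (a real system would use a YAML library and Nav2's map loading, not hand parsing), reading that metadata looks like this:

```python
# Minimal sketch: parse the metadata of a saved map (map_server YAML format)
# so the merged map can be handed to Nav2 for localization and planning.
# Hand-rolled parsing keeps the example dependency-free; use a YAML library
# in practice.

def parse_map_yaml(text):
    meta = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    meta["resolution"] = float(meta["resolution"])
    meta["origin"] = [float(v) for v in meta["origin"].strip("[]").split(",")]
    return meta

# Example metadata as written by a map saver (filename is illustrative)
example = """image: merged_map.pgm
resolution: 0.05
origin: [-10.0, -10.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.25
negate: 0"""

meta = parse_map_yaml(example)
```

Because every robot localizes against the same origin and resolution from this file, their planned paths live in one shared coordinate frame, which is what makes cooperative planning on the pre-built map possible.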

From Simulation to Real-World Deployment: How It Works

While Gazebo is used extensively for development, validation, and stress-testing, the software stack is designed for real-world deployment from the ground up. The same components (custom map merging, namespace management, frontier exploration, and Nav2 integration) run identically on physical robots.

The only differences are:

  • Simulation-specific launch files and Gazebo plugins are excluded
  • Real sensors replace simulated ones
  • Physical robot parameters (wheel odometry, inertial measurements) are tuned

Real-world validation

This system has been successfully tested on e-con rover platforms, demonstrating:

  • Real-world compatibility with production hardware
  • Safe navigation during mapping in dynamic environments
  • Practical multi-robot mapping performance outside simulation

Such validation confirms that the architecture is a deployable solution for real facilities.

Kick-Start Your Multi-Robot Mapping Journey with e-con Systems

Building a production-ready multi-robot autonomous mapping system requires solving real engineering challenges. Every component must be designed for scale, reliability, and real-world conditions. That’s why e-con Systems brought together ROS 2, Nav2, SLAM Toolbox, and custom exploration and merging logic to build a system that:

  • Enables parallel exploration for faster facility mapping
  • Produces unified global maps for digital twins and downstream applications
  • Scales cleanly from 4 to 6 to N robots
  • Transitions seamlessly from simulation to real-world deployment
  • Has been validated on e-con Systems’ rover platforms

e-con Systems has been designing, developing, and manufacturing embedded vision solutions – from custom OEM cameras to complete ODM platforms – since 2003. If you’re ready to build and deploy your own multi-robot mapping system, talk to our experts by writing to camerasolutions@e-consystems.com.

Interested in implementing this multi-robot mapping framework in your own environment?
Access the complete source code and implementation reference to get started.

We’d be happy to discuss how our cameras and compute platforms can support your autonomous robotics applications.
