Top factors to consider when integrating multiple cameras into embedded vision applications
Multi-camera systems offer a wide array of possibilities across industries, providing a broader and more comprehensive view of the scene. From industrial automation to surveillance and virtual reality, the integration of multiple cameras allows enhanced visual data capture and analysis.
In this blog, we’ll explore the critical factors you should take into account when integrating multiple cameras into your camera-enabled system. We’ll also delve into practical use cases, synchronisation methods, camera interfaces, and the ideal host platform for optimal performance.
Use cases: synchronised and non-synchronised
Before examining the key factors, let us explore the scenarios in which synchronisation is, and is not, required for effective multi-camera setups.
Airport kiosk – Non-Synchronised Setup: In airport kiosks, multiple cameras serve diverse functions like iris recognition, facial recognition, and document scanning. Although each camera contributes to the system’s functionality, synchronisation is not required as their functions operate independently.
Delivery robots – Synchronised Setup: Delivery robots that need a 360° or 180° view rely on multiple cameras with stitched images. Here, camera synchronisation becomes crucial to ensure accurate data capture and enable real-time navigational decisions.
Factors to consider when integrating multi-camera systems
The factors that play a crucial role in the successful integration of multi-camera systems are:
- Number of cameras
- Synchronisation method
- Camera interface
- Host platform
Let us now look at each of the above factors in detail.
Number of cameras
The first crucial consideration is determining the number of cameras you need. This decision depends on various factors, such as the level of accuracy and details required, the nature and position of the target object, the desired field of view, and the processing power of your host platform. Opting for a higher number of cameras can provide benefits like higher resolution, reduced lens distortion, and a broader field of view.
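A quick way to sanity-check camera count against the host's processing budget is to estimate the raw bandwidth each stream adds. The sketch below is a minimal, hypothetical calculation (function names are our own) for uncompressed sensor data:

```python
def camera_bandwidth_gbps(width, height, fps, bits_per_pixel):
    """Raw (uncompressed) bandwidth of one camera stream in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

def total_bandwidth_gbps(num_cameras, width, height, fps, bits_per_pixel):
    """Aggregate bandwidth the host must sustain across all cameras."""
    return num_cameras * camera_bandwidth_gbps(width, height, fps, bits_per_pixel)

# Example: four 1080p cameras at 30 fps, 10-bit RAW
# works out to roughly 0.62 Gbit/s per camera, ~2.5 Gbit/s aggregate.
```

Figures like these feed directly into the interface and host platform choices discussed below.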
Synchronisation method
Synchronisation plays a pivotal role in multi-camera systems to ensure that all cameras capture images or frames at the same time. There are two main synchronisation methods to consider:
Software synchronisation: For capturing static objects in controlled environments where frame-level synchronisation is not critical, software synchronisation is recommended. This method relies on software algorithms to align captured frames, providing a practical solution for scenarios where precise synchronisation is not essential.
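In practice, software synchronisation often comes down to pairing frames by capture timestamp after the fact. The following is a minimal sketch of that idea (the helper name and timestamp lists are illustrative, not a specific camera API):

```python
def align_frames(ts_a, ts_b, tolerance):
    """Pair each camera-A timestamp with the nearest camera-B timestamp.

    Returns (index_a, index_b) pairs whose timestamps differ by no more
    than `tolerance` seconds; frames with no close match are dropped.
    """
    pairs = []
    for i, ta in enumerate(ts_a):
        # Index of the camera-B frame closest in time to this camera-A frame.
        j = min(range(len(ts_b)), key=lambda k: abs(ts_b[k] - ta))
        if abs(ts_b[j] - ta) <= tolerance:
            pairs.append((i, j))
    return pairs
```

Because the cameras free-run, some frames inevitably fall outside the tolerance window, which is why this approach suits static scenes better than fast-moving ones.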
Hardware synchronisation: For capturing moving objects with improved accuracy and synchronisation, hardware synchronisation is recommended. This method involves initiating simultaneous image capture in all cameras through a hardware trigger, like an external PWM signal. This ensures perfectly aligned frames and is ideal for embedded vision applications requiring real-time decisions.
Application scenarios of software and hardware synchronisation
To further illustrate these concepts, let’s consider a couple of examples. An example of the software synchronisation method would be a 3D scanning device used for the reconstruction of objects. In this case, the target object doesn’t move, and the task is typically carried out in a controlled lighting environment. Here, the software-based synchronisation method will work effectively.
On the other hand, examples of hardware synchronisation include surround view systems in fleets and remotely controlled vehicles, as well as automated sports broadcasting and analytics systems. These applications require precise synchronisation to capture simultaneous and accurately aligned frames from multiple cameras.
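With a hardware trigger, it is still worth validating that the captured frames actually land within the skew budget your application tolerates. A minimal, hypothetical check (names are our own) over per-trigger capture timestamps might look like:

```python
def max_skew(timestamps):
    """Worst-case capture-time spread across cameras for one trigger pulse."""
    return max(timestamps) - min(timestamps)

def verify_sync(frame_sets, budget_s):
    """True if every triggered frame set stays within the skew budget.

    `frame_sets` is a list of per-trigger timestamp lists, one entry per
    camera, in seconds.
    """
    return all(max_skew(ts) <= budget_s for ts in frame_sets)
```

A check like this can catch wiring or exposure-configuration mistakes that would otherwise surface as subtle stitching artefacts.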
Camera interface
The camera interface is another crucial factor to consider, driven primarily by bandwidth requirements. In applications where high-resolution images must be captured at high frame rates, a MIPI interface is preferred over USB. Factors like data transfer distance, reliability, and host platform compatibility also influence the choice. For high-bandwidth data transfer beyond 2 to 3 metres, consider interfaces like GMSL2 or FPD-Link III.
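Matching the aggregate bandwidth to an interface can be sketched with a simple lookup. The rates below are approximate nominal signalling figures for illustration only (real effective throughput depends on protocol overhead, lane rates, and cabling), and the helper names are our own:

```python
# Approximate nominal bandwidths in Gbit/s; real throughput is lower
# due to protocol overhead, and MIPI depends on the D-PHY lane rate.
INTERFACE_BANDWIDTH_GBPS = {
    "USB 3.0": 5.0,
    "MIPI CSI-2 (4-lane)": 10.0,
    "GMSL2": 6.0,
    "FPD-Link III": 4.16,
}

def viable_interfaces(required_gbps, headroom=0.8):
    """Interfaces whose nominal rate covers the load after a headroom factor."""
    return [name for name, gbps in INTERFACE_BANDWIDTH_GBPS.items()
            if gbps * headroom >= required_gbps]
```

For example, a four-camera 1080p RAW10 load of roughly 2.5 Gbit/s fits any of these, whereas a load near 5 Gbit/s already rules out USB 3.0 and the serialiser links, leaving a multi-lane MIPI path.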
Host platform
Selecting the appropriate host platform is vital for optimal performance and compatibility with your multi-camera system. Various processors are available in the market, with the NVIDIA Jetson series being a popular and advanced option; other choices include the NXP i.MX series, Qualcomm, and Texas Instruments. When selecting a processor, weigh factors such as AI performance (measured in TOPS), power consumption, thermal performance, maximum camera support, required interfaces, form factor, cost, and the software ecosystem.