Meeting the demand for smarter and more connected robots
Look inside any factory or production facility today, and you’ll probably find robots hard at work. According to research by the International Federation of Robotics, between 2020 and 2022 another two million connected industrial robots will be installed in factories around the world. Florian Schmäh, Product Sales Manager Boards at Rutronik, explains more.
Behind this growth lie two significant trends: robots are becoming smarter and, increasingly, they are collaborating with their human counterparts.
As production engineering managers become more familiar with the capabilities of industrial robots, they see further opportunities for robots to take on more demanding tasks. Thanks to sensing capabilities and intelligent systems that can learn by example, robots' usefulness now extends beyond the repetitive, manually intensive, labour-saving roles for which they were first employed.
By incorporating machine learning techniques, a robot can learn from a human operator who guides its arm and tool selections through the various steps needed to complete a task. Once guided, machine learning can optimise those movements for speed and efficiency of control. As robots become even ‘smarter’, vision processing algorithms could allow them to teach themselves by watching a human undertake the task.
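As a loose illustration of the idea, the sketch below records a hand-guided demonstration as a list of joint-space waypoints and then "optimises" it. Here a simple moving-average smoother stands in for a real learning step; the waypoint values and two-joint arm are invented for the example.

```python
# Illustrative sketch only: a hand-guided demonstration captured as
# joint-space waypoints, then smoothed. The moving average stands in
# for a genuine trajectory-optimisation or learning algorithm.

def record_demo(samples):
    """Pretend capture step: each sample is a tuple of joint angles (degrees)."""
    return list(samples)

def smooth(waypoints, window=3):
    """Moving-average smoothing applied to each joint channel."""
    smoothed = []
    for i in range(len(waypoints)):
        lo = max(0, i - window // 2)
        hi = min(len(waypoints), i + window // 2 + 1)
        chunk = waypoints[lo:hi]
        smoothed.append(tuple(sum(c[j] for c in chunk) / len(chunk)
                              for j in range(len(waypoints[0]))))
    return smoothed

# A jittery human-guided path for a hypothetical two-joint arm.
demo = record_demo([(0.0, 10.0), (2.0, 14.0), (1.0, 12.0), (3.0, 18.0)])
path = smooth(demo)
```

The smoothed path keeps the same number of waypoints but removes the jitter a human hand inevitably introduces, which is the flavour of post-demonstration optimisation the text describes.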
Also, as robots become more capable, their ability to work alongside human counterparts takes them from behind a protective safety screen to be physically located with their human co-workers. Collaboration can be a complex set of actions that we, as humans, instinctively pick up very quickly when we work alongside another person. Visual cues, gestures, and voice snippets can be subtle signs of task completion or handover, which a robot would need to learn as an indication of intent. Collaborative robots, or ‘cobots’, therefore require significantly more sensing capability, in addition to being able to connect to and control other items of production machinery to be effective and productive.
One of the essential attributes an industrial robot requires is vision. Machine vision systems have been in use within the industrial automation domain for decades for a variety of pattern recognition, image matching, and simple image processing tasks. Today, however, cameras can measure the dimensions of objects within an image frame, indicate object temperature, and report colours, providing significantly more information than previous devices.
Deep learning neural networks, such as the convolutional neural networks that suit image classification and inspection tasks, are becoming the norm for vision processing. Pre-trained neural networks can recognise and identify objects exceptionally quickly, and as camera specifications have improved and neural network algorithms have become more refined, error rates have fallen considerably.
The primary challenge for any automatic optical inspection (AOI) system is throughput speed. With production lines moving at speed, the algorithms demand a processing capability typically beyond ordinary microprocessors. Traditionally, graphics processing unit (GPU) or field programmable gate array (FPGA) solutions met the demand for high levels of computing capacity and processing bandwidth, but as the requirements climb ever higher, so do the complexity and power consumption involved.
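A quick back-of-envelope calculation shows why AOI throughput is demanding. All the figures below (line rate, cost of one inference pass) are illustrative assumptions, not values from the article:

```python
# Back-of-envelope AOI compute budget. All figures are illustrative
# assumptions chosen to show the shape of the calculation.
parts_per_second = 10         # assumed line rate: inspected parts per second
ops_per_inference = 2e9       # assumed cost of one CNN pass, in operations

required_ops = parts_per_second * ops_per_inference  # sustained ops/second
frame_budget_s = 1 / parts_per_second                # time allowed per part

engine_tops = 1e12            # capacity of a 1 TOPS inference engine
utilisation = required_ops / engine_tops             # fraction of capacity used
```

Even at 2 GOP per frame and ten parts per second, the sustained requirement is 20 GOP/s, far beyond a general-purpose microcontroller yet only a small fraction of a dedicated 1 TOPS inference engine's capacity.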
A potential alternative might be to use a cloud-based data centre, where real-time on-demand computing capacity is effectively infinitely scalable. However attractive the cloud-based options might be, the technical challenges of long latency, varying bandwidth, and lack of deterministic behaviour dictate that using a cloud service is not viable. There is also the issue of security: connecting to the cloud exposes the whole industrial automation infrastructure to threats and adversaries.
The throughput challenges of industrial vision processing are shared by other markets such as automotive and security. Faced with increasing demand for low latency, high bandwidth neural network processing, semiconductor vendors have created heavily optimised compute solutions for the task of inference. Inference, the term for running a trained neural network model, is compute intensive and requires memory close to the processing cores.
An example of an inference ‘engine’ further optimised for image and video processing tasks is the Intel Movidius Myriad X vision processing unit (VPU) system-on-chip (SoC). The Myriad X features 16 very long instruction word (VLIW) 128-bit streaming hybrid architecture vector engine (SHAVE) cores and is capable of performing up to one trillion operations per second (1 TOPS) of inference. See Figure 1 (top).
A total of eight high definition video cameras can be connected directly to the Myriad X using its 16 MIPI lanes. Each camera stream can utilise one of 20 hardware-based accelerators for a variety of image or video frame manipulations; for example, stereo depth measurements or Kalman filters that isolate specific object features. Memory throughput peaks at 450 GB per second, more than adequate for complex, high throughput image processing tasks.
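To put that 450 GB/s figure in perspective, a rough calculation of the input-side data rate of eight HD camera streams is useful. The stream parameters below (resolution, bit depth, frame rate) are assumptions for illustration:

```python
# Rough check of eight HD camera streams against the Myriad X's quoted
# 450 GB/s peak memory throughput. Stream parameters are assumed values.
cameras = 8
width, height = 1920, 1080    # assumed full-HD frames
bytes_per_pixel = 2           # e.g. raw 16-bit pixels
fps = 60                      # assumed frame rate

per_camera = width * height * bytes_per_pixel * fps   # bytes per second
total = cameras * per_camera                          # aggregate input rate
headroom = 450e9 / total      # memory bandwidth vs raw camera input
```

Eight such streams total roughly 2 GB/s, leaving well over two orders of magnitude of memory bandwidth for the intermediate buffers and weight traffic that neural network processing generates.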
Despite its performance specifications, the Myriad X consumes no more than 3 W. It is available in a variety of sub-system configurations, including system-on-module, M.2 A+E key card, and PCIe card formats, from suppliers such as Advantech, IEI, Aaeon, and Intel.
To simplify the design of machine vision applications, Intel created the OpenVINO (open visual inference and neural network optimisation) software toolkit. The toolkit supports a heterogeneous development and prototyping environment across microprocessors, FPGAs, GPUs, and VPUs with a common API framework. Packed with function-specific libraries, example applications, pre-optimised kernels, and pre-trained models, OpenVINO supports many popular neural network frameworks, such as Caffe and TensorFlow.
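As a pseudocode-level sketch of what that common API looks like in practice, the legacy IECore Python flow is shown below. The model file names, the preprocessed input frame, and the presence of a Myriad X device are all assumptions, and the snippet is not runnable without the OpenVINO toolkit installed:

```python
# Sketch only: assumes the OpenVINO toolkit is installed, a model has been
# converted to IR format (model.xml / model.bin), and a Myriad X is attached.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Target the Myriad X VPU; "CPU", "GPU", or "FPGA" could be substituted
# here unchanged, which is the heterogeneous support the toolkit promises.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
# 'frame' stands for a preprocessed image matching the model's input shape.
results = exec_net.infer(inputs={input_name: frame})
```

The same application code can thus be prototyped on a developer's CPU and redeployed to the VPU by changing only the device name.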
With OpenVINO and Intel Movidius Myriad X-based sub-systems, industrial robot designers can quickly prototype and deploy new robotic applications to meet the increasing demand for large, high throughput vision processing tasks, and do so at a fraction of the power budget required by GPUs and other programmable devices.
Connecting your robot to any legacy network device
Industrial automation and process control are not new concepts; they have been around for over 40 years. Unfortunately, in that time, the protocols used to control machinery have multiplied, and even those industrial control and networking protocols that have stood the test of time have gone through many iterations. Today, Ethernet is establishing itself as the dominant wired networking protocol, with promising enhancements such as time-sensitive networking delivering low latency, highly deterministic communication. As a consequence, new robotic deployments may need to establish bi-directional communication with equipment that uses a protocol such as Profinet, EtherCAT, Modbus TCP, or many others.
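To make concrete what traffic on one of these legacy protocols looks like, the sketch below builds the 12-byte request a Modbus TCP master sends to read holding registers (MBAP header plus a function-code-0x03 PDU). The transaction ID, unit ID, register address, and count are invented example values:

```python
import struct

# Illustrative sketch: the 12-byte Modbus TCP "read holding registers"
# request. All field values passed in below are assumed examples.
def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build an MBAP header followed by a function-code-0x03 PDU."""
    protocol_id = 0    # always 0 for Modbus TCP
    length = 6         # bytes after the length field: unit id + 5-byte PDU
    return struct.pack(
        ">HHHBBHH",    # big-endian, as the Modbus TCP framing requires
        transaction_id, protocol_id, length,
        unit_id,
        0x03,          # function code: read holding registers
        start_addr, count,
    )

frame = modbus_read_holding_registers(transaction_id=1, unit_id=17,
                                      start_addr=0x006B, count=3)
```

A protocol converter's job is essentially to translate between framings like this one and whatever the equipment on the other side speaks, while preserving the timing guarantees each side expects.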
Thankfully, the integration task is made much easier by the growing number of protocol conversion cards now on the market. An example is the Ixxat INpact family of protocol converter PC cards and modules from specialist industrial networking company HMS Networks. Built around HMS's own multi-network processor IC, they significantly simplify the task of provisioning reliable communications between industrial PCs, network-attached machinery, and industrial robots. Connecting Ethernet to EtherCAT, Modbus TCP, Profibus, or Profinet-based applications in a flexible yet reliable manner is now possible.
At the heart of the Ixxat INpact cards is the HMS Networks Anybus NP40 multi-network processor. Specifically developed for the task of protocol conversion, the NP40 processor is optimised to meet the real-time high bandwidth requirements of Ethernet and TCP/IP protocols while exhibiting low power consumption characteristics.
The capabilities of the Anybus NP40 processor can be accessed either by purchasing one of the Ixxat INpact PC cards or, for manufacturers of industrial automation equipment, by incorporating the NP40 processor into their system. Either way, the NP40 represents a ready-made solution for on-the-fly protocol conversion, with latency below 15 microseconds.
The NP40 architecture consists of an Arm Cortex-M3 core and an FPGA fabric. The Cortex core undertakes the management of the protocol and application stacks, and the FPGA is responsible for managing the physical interfaces and the real-time switch. With its flash-based architecture, the NP40 firmware can be reprogrammed to accommodate different protocols with ease, providing a highly efficient, low latency and high bandwidth approach to supporting legacy industrial networking protocols. Using this approach, cobots and other new robotics applications can be quickly, efficiently, and reliably integrated into existing industrial automation domains.
As robotic applications become more sophisticated and more closely integrated into industrial processes, the need for compute intensive resources is growing significantly. This comes against a backdrop of factory floor space becoming ever more constrained, limiting the number of control cabinets permitted within a given area. Squeezing even more technology into a given space requires that energy consumption is kept to a minimum to avoid thermal management challenges. Low power, high performance computing devices such as the Myriad X vision processing unit and the Anybus NP40 represent just two technologies that are already advancing the development of more sophisticated robotic systems.