
Overcoming IoT latency, security and scalability issues

7th June 2019
Alex Lynn

The Hardware Pioneers’ gatherings are becoming a vital knowledge source for those involved in Internet of Things (IoT) endeavours. Their latest seminar, hosted in Central London in May, was entitled ‘Building Smarter IoT Products at the Edge’.

By Mark Patrick, Mouser Electronics

It proved to be a thought-provoking and insightful event, with a series of industry experts giving their views on where artificial intelligence (AI) is likely to take us in the next few years, and what implications this will have for the supporting semiconductor, board level and software technology.

Opening proceedings was Matthieu Chevrier of Texas Instruments. As he explained, back in the 1950s and 1960s it was the addition of sensors (and access to the contextual data they captured) that helped transform “dumb” machines into more aware robots. Now robotics is going through what he described (appropriating the paleontological term) as a “Cambrian Explosion.” A multitude of different application opportunities (from delivery/logistics roles through to care of the elderly) are starting to open up, and new robot models (plus robot development platforms) are emerging.

The breadth of sensors available, and their level of sophistication, is far greater than anything offered to earlier robot generations. However, a clearer strategy is needed for what is done with the data derived from these sensors. If the potential of robots is to be fully realised, they will need to interact closely with human beings. In a manufacturing scenario, for example, rather than simply being fenced off, robots should be able to work alongside human staff, directly collaborating on assigned tasks.

Pivotal to the operation of such “cobots” will be the ability to accurately determine the proximity of human work colleagues. Matthieu gave details of the array of different sensing mechanisms that might be employed for such purposes – time-of-flight (ToF), LiDAR, mmWave, etc. Likewise, delivery drones may need these kinds of sensor technologies to identify where power lines are, so they can avoid them while airborne (they probably wouldn’t be discernible using conventional image sensors).
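To make the proximity idea concrete, here is a toy sketch (not from the talk) of how a range reading, whatever sensing technology produces it, might feed a cobot’s speed policy; the distances and the linear ramp are illustrative assumptions only.

```python
# Toy illustration of proximity-based cobot behaviour. The thresholds and the
# linear speed ramp are assumptions for illustration, not a safety standard.
SLOW_DOWN_AT_M = 1.5   # start slowing when a person is within 1.5 m (assumed)
STOP_AT_M = 0.5        # stop completely within 0.5 m (assumed)

def arm_speed_factor(nearest_person_m: float) -> float:
    """Scale the cobot's speed according to the closest detected person."""
    if nearest_person_m <= STOP_AT_M:
        return 0.0                                    # full stop
    if nearest_person_m <= SLOW_DOWN_AT_M:
        # Ramp linearly between the stop and slow-down boundaries.
        return (nearest_person_m - STOP_AT_M) / (SLOW_DOWN_AT_M - STOP_AT_M)
    return 1.0                                        # full speed

# A ToF, LiDAR or mmWave sensor would supply the range; here it is simulated.
for reading_m in (2.0, 1.0, 0.3):
    print(reading_m, "->", arm_speed_factor(reading_m))
```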

Digesting all this acquired data and acting on it appropriately is where AI comes in. So far this has generally depended on cloud-based servers, but it will not stay that way. In future, a large proportion of AI processing activity will need to be undertaken in situ (whether by cobots, drones or IoT nodes) – and this proved to be the recurring theme of the evening, with each of the three presenters tackling a different aspect of it in turn.

Whether the scenario is robotics, driverless cars or industrial monitoring, sending sizeable amounts of captured data back to the cloud is simply too long-winded a process to be practical. Instead, low-latency operation will be mandated, so that systems can respond in real time. Addressing the processing overhead at the edge will call for optimised hardware. Texas Instruments’ AM5749 system-on-chip (SoC) is one such example: it can take trained neural network models created using cloud-based infrastructure and subsequently run them in resource-limited application settings at the edge.
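As a minimal sketch of that ‘train in the cloud, infer at the edge’ split (illustrative only, and not tied to TI’s actual toolchain), the randomly generated weights below stand in for a model trained on cloud infrastructure and exported to the device; only the compact result, rather than the raw sensor frame, would then travel upstream.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for weights trained in the cloud and exported to the edge device.
W = rng.standard_normal((64, 4))   # 64 input features -> 4 output classes
b = rng.standard_normal(4)

def classify_locally(sensor_frame):
    """Run inference on-device so the raw frame never has to leave the node."""
    logits = sensor_frame @ W + b
    return int(np.argmax(logits))

frame = rng.standard_normal(64)    # one captured sensor frame
label = classify_locally(frame)
print("local decision:", label)    # a few bytes, versus the full raw frame
```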

Continuing Matthieu’s Cambrian Explosion analogy, ARM spokesperson Chris Shore pointed out that in the 26 years since the company was founded, it has been responsible (through collaboration with its semiconductor partners) for shipping a staggering 100 billion chips worldwide. Impressive as that is, he went on to say that, thanks to the ongoing proliferation of AI, ARM expects to ship another 100 billion – but this time within a period of only four years.

Like Matthieu, he questioned the validity of the current arrangement, in which systems’ intelligence resides back in the cloud. Although this has the advantage of concentrating all the data in one place, there are clear problems associated with it. In addition to the latency issue already outlined, he also raised system security and power consumption.

The tactic he advocated was right-sized compute at the right place on the network, stating that relying on the cloud was not always going to be the answer. It would often be better to, as he put it, “think local” – doing more decision-making at the edge whenever possible, then shifting a much smaller quantity of data back to the cloud as required (where value might then be extracted from it). Much of the intelligence being implemented will, in his opinion, depend on access to optimised software libraries that can take care of functions such as digital signal processing (DSP) and neural network acceleration.
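A minimal sketch of that ‘think local’ pattern might look like the following; the vibration threshold, the summary fields and the upload placeholder are assumptions for illustration rather than anything ARM-specific.

```python
import json
import statistics

VIBRATION_LIMIT_MM_S = 7.1   # hypothetical alarm threshold for this sketch

def summarise_window(samples):
    """Reduce a window of raw readings to a few numbers plus a local decision."""
    rms = statistics.fmean(s * s for s in samples) ** 0.5
    return {
        "rms": round(rms, 3),
        "peak": round(max(abs(s) for s in samples), 3),
        "alarm": rms > VIBRATION_LIMIT_MM_S,   # decision made on the device
    }

def upload(summary):
    # Placeholder for whatever transport a real deployment uses (MQTT, HTTPS, ...).
    print("to cloud:", json.dumps(summary))

raw_window = [0.4, -0.6, 0.5, -0.3, 0.7, -0.5] * 100   # 600 raw samples stay local
upload(summarise_window(raw_window))                    # only a small summary is sent
```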

Chris told the audience how members of the latest generation of ARM cores (such as the Cortex-M4, with built-in DSP functionality) are highly suited to on-device compute operations. They can adhere to the power consumption, space and budgetary constraints involved, while also delivering the necessary advanced security features. Through the company’s Platform Security Architecture, in-depth analysis can be carried out to uncover potential vulnerabilities – addressing the security threats presented by malware, communication breaches and physical damage – and this helps with the design and deployment of systems that possess far higher degrees of security.

Wrapping things up was Greg Holland of Swim.ai, a San Jose-based startup that ARM recently made a financial investment in. The company has developed an open-source platform for edge-based streaming applications (such as processing video content from remote camera units), and it is already finding traction in the smart city (traffic management, etc.) and factory automation arenas.

Because of the vast number of devices that IoT networks will comprise, scalability must be taken into account as the overall volume of data being acquired ramps up. Until now, software at the edge has effectively been static (with just a fixed set of responses to any circumstance arising), but the industry is now looking to make it much more adaptive, with dynamic response capabilities.

Often the datasets being compiled by IoT sensor nodes are very large (the video footage generated at a single traffic intersection, for instance, amounts to around 40Gb per day), yet the information density involved is low by comparison, as only a tiny fraction of that data might prove important.

With immediate feedback often being needed, a more rationalised methodology must be put in place that scales to hundreds of thousands (possibly even millions) of devices and prevents operational delays.

The Swim.ai approach is as follows: rather than having stateless services at the edge backed by a large, centralised data processing and storage reserve, it consists of edge-based stateful agents, each maintaining a specific piece of useful state. These agents are connected together in a peer-to-peer network. Through hierarchical aggregation, information can be garnered that provides a better understanding of the nuances of even the most complex situations. Furthermore, this can be done while utilising infrastructure (and under-used processing capacity) that is already deployed.
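A rough sketch of that pattern, stateful agents feeding a hierarchical aggregate, might look like the following; the class names and the congestion threshold are purely illustrative and do not reflect Swim.ai’s actual API.

```python
class IntersectionAgent:
    """Stateful edge agent: holds the live state of one traffic intersection."""
    def __init__(self, name):
        self.name = name
        self.vehicle_count = 0

    def on_detection(self, vehicles):
        self.vehicle_count = vehicles          # update local state in place

class CityAgent:
    """Aggregates the states of its child agents into a higher-level view."""
    def __init__(self, intersections):
        self.intersections = intersections

    def congestion_report(self):
        # Forward only the interesting fraction of the data, not every reading.
        return {i.name: i.vehicle_count
                for i in self.intersections if i.vehicle_count > 20}

a, b = IntersectionAgent("5th & Main"), IntersectionAgent("Park & 3rd")
a.on_detection(34)
b.on_detection(7)
print(CityAgent([a, b]).congestion_report())   # {'5th & Main': 34}
```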

In summary, positioning intelligence locally (as opposed to in the cloud) means that much shorter latencies can be supported, giving the resulting systems the rapid responsiveness required for making time-critical decisions. The limited power and space resources intrinsic to edge deployments mean that processing work will have to be carried out as efficiently as possible, with compute time kept to an absolute minimum. This will continue to drive new innovations, from both a hardware and a software standpoint.
