Optimal configurations for real-time collaboration in the Metaverse
Much has been discussed about what the Metaverse is and how this technology may change how humans interact with each other. Today, virtual reality (VR) and augmented reality (AR) technologies enable people to experience fully virtual worlds and to see information overlaid on real-world visuals. Erik Grundstrom, Director, FAE & Business Development, Supermicro explores further.
This article originally appeared in the Dec'22 magazine issue of Electronic Specifier Design – see ES's Magazine Archives for more featured publications.
VR and AR experiences must respond to a user's movement with very low latency, or the experience becomes uncomfortable, or worse. Typically, a user wears some kind of headgear and, in many cases, peripheral devices that enable the selection of content or actions and may assist in navigating a virtual world. There are several challenges in implementing a responsive and helpful VR or AR system.
The role of remote data centres
Virtual worlds may exist in remote data centres, where high bandwidth and powerful servers can keep track of thousands of users and many databases. These systems must process movements and interactions and, in the case of AR, retrieve data to overlay onto real-world imagery. The servers must have sufficient processing power, enough cores/threads, and fast GPUs able to render scenes at the required frame rate. The number of cores and GPUs and the networking bandwidth must therefore be sized according to the number of users and the volume of data in a virtual world. Servers with dual CPUs, multiple high-performance graphics cards, and the latest networking bandwidth to communicate with other servers are preferable for large-scale virtual worlds.
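As a back-of-the-envelope sketch, the server count for a virtual world can be bounded by whichever side saturates first: GPU rendering or CPU simulation. All figures below (users per GPU, update rates, core counts) are illustrative assumptions, not vendor specifications:

```python
import math

def servers_needed(concurrent_users,
                   users_per_gpu=64,            # users one GPU can render for (assumed)
                   gpus_per_server=4,           # graphics cards per dual-CPU server
                   updates_per_user_hz=30,      # state updates each user generates
                   updates_per_core_hz=20_000,  # updates one core can process (assumed)
                   cores_per_server=128):       # cores across both CPU sockets
    """Estimate the server count from two independent bottlenecks:
    GPU rendering capacity and CPU simulation capacity."""
    by_gpu = concurrent_users / (users_per_gpu * gpus_per_server)
    by_cpu = (concurrent_users * updates_per_user_hz
              / (updates_per_core_hz * cores_per_server))
    return math.ceil(max(by_gpu, by_cpu))

print(servers_needed(10_000))  # GPU-bound with these assumptions
```

With these numbers the GPU side dominates, which matches the emphasis on multiple high-performance graphics cards per server.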
Closer to the Edge (the user), different techniques can be used to transmit information or to render it away from the data centre. For example, if a new image is rendered in the data centre, the entire (compressed) image must be transmitted to the user's glasses or goggles. This transmission takes time and can lead to missed frames or a delayed response to a user's movement.
The power of the Edge
One alternative is to send only the rendering commands from the data centre server and let the Edge graphics device render the scene from that sequence of commands. This method still requires low latency, but bandwidth is far less critical than when full frames are transmitted. The most responsive method of interacting in a VR or AR environment is to render the graphics as close to the Edge (the user, in this case) as possible. With the advances in graphics performance of Edge devices, the rendering quality may be acceptable, depending on scene complexity, and the experience more interactive.
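The saving from streaming commands instead of frames can be sketched with equally rough numbers; the command count and the bytes per command below are assumptions for illustration:

```python
def command_stream_mbps(commands_per_frame=2_000,  # draw/scene-update commands (assumed)
                        bytes_per_command=32,      # serialised command size (assumed)
                        fps=90):
    """Bandwidth when only scene commands are streamed and the
    Edge device renders the frame locally."""
    return commands_per_frame * bytes_per_command * 8 * fps / 1e6

print(command_stream_mbps())  # roughly an order of magnitude below full frames
```

Under these assumptions the command stream needs tens of megabits per second rather than hundreds, shifting the constraint from bandwidth to latency and to the Edge device's rendering power.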
All three approaches to delivering AR and VR require two-way communication, from the Edge to the data centre and back. Local data centres can also be used to improve latency. A continuum of devices, networking, and back-end servers must be architected to meet users' service-level agreements (SLAs) for virtual or augmented worlds. Rendering performance is crucial, but so is updating the visual environment in step with the user's movement. Visuals that lag behind the user's movements create an unpleasant experience and will disappoint users.
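One way to reason about the end-to-end path is a motion-to-photon latency budget. The per-stage figures below are illustrative assumptions, summed against the commonly cited ~20 ms comfort threshold for VR:

```python
# Illustrative motion-to-photon latency budget, in milliseconds (assumed figures)
budget_ms = {
    "sensor sampling":                      2.0,
    "network uplink (Edge -> data centre)": 4.0,
    "simulation + render":                  6.0,
    "encode + network downlink":            5.0,
    "decode + display scan-out":            3.0,
}

total = sum(budget_ms.values())
target = 20.0  # commonly cited comfort threshold for VR
print(f"total {total:.1f} ms: "
      f"{'within' if total <= target else 'over'} the {target:.0f} ms target")
```

A budget like this makes the trade-off concrete: moving rendering to a local data centre or the Edge device removes the network legs, freeing milliseconds for richer simulation and rendering.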
Prepare for the (virtual) future
The database, rendering, and network performance must all be kept in sync and tuned to work with the other steps in the process. A range of servers, software, and networking will be critical in the not-so-distant future, where VR and AR may play an essential role in the lives of many and have the potential to improve society, create better products, increase knowledge, and help address climate change.