
Application-optimised SSDs – life in the digital fast lane

9th December 2022
Paige West

In Germany, many highways have no speed limit. It is quite common to see cars cruising at 90mph, with the occasional driver even topping 150mph – that is, if the weather and traffic conditions permit such high-adrenaline driving.

However, on rocky backcountry roads, an off-road vehicle will get you to your destination faster than any sports car, illustrating that the ‘tool’ needs to fit the task.

In the data centre, data moves at the speed of light – i.e., considerably faster than any Audi, BMW, Mercedes, or Porsche. But traffic conditions in the data centre vary just as widely as those on real-world roads: some data can move uninhibited on multi-lane fibre highways, while other data might have to move constantly back and forth along the alleyways of multi-tier application landscapes or cope with the perpetual rush-hour traffic of data storage.

Matthias Poppel, Chief Sales & Marketing Officer (CSMO) at Swissbit, discusses further.

In the data centre, too, selecting the right equipment determines how fast and efficiently one gets from A to B. This is why solid-state drives (SSDs) have widely replaced hard-disk drives (HDDs): SSDs read data about twenty times faster than HDDs and write data up to ten times faster.

SSDs are in a league of their own compared with old-fashioned spinning storage media, but there can be substantial differences in SSD read/write speeds. Much depends on the application that accesses the data: streaming media interacts with SSDs differently than interactive cloud applications or machine learning workloads do. This is one of the reasons why the big cloud providers, so-called hyperscalers such as AWS or Microsoft, invest the time, effort, and resources to configure, and sometimes even design, their own data centre equipment.

Smaller service providers and data centre operators don’t have the luxury of servers and storage tailored to their individual specifications; they must rely on off-the-shelf hardware. This creates various challenges. For example, standard SSDs experience latency spikes while periodically running memory recovery routines that are colourfully, and fittingly, named ‘garbage collection’. These spikes can cause irritating interruptions when streaming a movie, or slow down e-commerce transactions. Being able to adjust SSD hardware to handle such issues gives the Amazons and Microsofts of this world a head start.
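
Just how much a rare background routine can hurt perceived performance is easy to see with a toy model. The sketch below adds a periodic garbage-collection stall to an otherwise steady read latency and reports the tail latency an application would observe; the interval and pause lengths are placeholder assumptions for illustration, not measurements of any real drive.

```python
import random

def simulate_read_latencies(n_reads=100_000,
                            base_latency_us=100.0,    # steady-state read latency (illustrative)
                            gc_every_n_reads=10_000,  # hypothetical garbage-collection interval
                            gc_pause_us=5_000.0):     # hypothetical stall while GC runs
    """Toy model: steady read latency with periodic garbage-collection stalls."""
    latencies = []
    for i in range(1, n_reads + 1):
        latency = base_latency_us + random.uniform(-10, 10)  # small natural jitter
        if i % gc_every_n_reads == 0:
            latency += gc_pause_us  # the unlucky read queued behind a GC cycle stalls
        latencies.append(latency)
    return sorted(latencies)

lat = simulate_read_latencies()
print(f"median : {lat[len(lat) // 2]:.0f} us")
print(f"p99.99 : {lat[int(len(lat) * 0.9999)]:.0f} us")  # tail dominated by the GC stalls
```

In this model only one read in ten thousand lands behind a stall, yet the tail latency comes out roughly fifty times the median – which is exactly what a viewer experiences as a stutter in a stream or a shopper experiences as a sluggish checkout.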

But for data centre operators, there is another way to tune SSDs to the demands of individual applications. Today, it is possible to combine state-of-the-art SSDs with specialised software that analyses how an application uses them: how frequently and how rapidly does the app write data? Do writes occur randomly or sequentially? By analysing this behaviour, the SSD firmware can be optimised to deliver exactly what the application needs, while avoiding unnecessary wear and tear on the solid-state drives.
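
A minimal sketch of what such an analysis might look at is shown below. The trace format, field names, and classification heuristics are assumptions made for illustration; they are not Swissbit’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class WriteOp:
    timestamp_s: float  # when the write was issued
    offset: int         # logical address of the write, in bytes
    length: int         # bytes written

def profile_writes(trace: list[WriteOp]) -> dict:
    """Derive a simple write-pattern profile from an I/O trace (illustrative heuristics)."""
    if len(trace) < 2:
        return {"writes_per_s": 0.0, "sequential_ratio": 0.0, "avg_write_kib": 0.0}
    duration_s = (trace[-1].timestamp_s - trace[0].timestamp_s) or 1.0
    sequential = sum(
        1 for prev, cur in zip(trace, trace[1:])
        if cur.offset == prev.offset + prev.length  # next write starts where the last one ended
    )
    return {
        "writes_per_s": len(trace) / duration_s,
        "sequential_ratio": sequential / (len(trace) - 1),
        "avg_write_kib": sum(op.length for op in trace) / len(trace) / 1024,
    }

# Example: a steady, large-block, mostly sequential pattern, as video segment writes might produce
trace = [WriteOp(i * 0.01, i * 1_048_576, 1_048_576) for i in range(1000)]
print(profile_writes(trace))
```

A profile like this – a high sequential ratio, large average writes, and a steady rate – would point towards firmware behaviour tuned for streaming-style workloads rather than for small, random transactional writes.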

The three benefits of SSD optimisation

Optimising SSDs for application-specific use aims primarily at increasing performance, but data centre operators can achieve total cost of ownership (TCO) benefits at the same time. Specifically, operators benefit in three ways from this innovation:

  1. Latency reduction: SSD firmware that is optimised for an individual application can cut read/write response times by up to a factor of two. In the example mentioned above, a colocation provider hosting a video streaming service could tweak the SSD firmware so that garbage collection does not run during active video streaming. The colo provider could then offer guaranteed response times for streaming – and turn a small technical tweak into a monetisable business benefit.
  2. Endurance: SSDs need to be replaced regularly, usually after three years. Stress tests show that by optimising SSDs for their applications, their lifecycle can be extended to five years. The reason: data writes are spread more evenly across the SSD’s storage space. Hyperscalers use custom methods to achieve this; with application-optimised SSDs, the drive performs these adjustments continually and automatically, with no need for further intervention.
  3. Steady performance: Initial SSD read/write performance tends to decline rapidly. Usually, it will be considerably lower after only 12 to 18 months, in some cases dropping to just one third of the initial level. Here too, application-optimised SSDs perform much better by employing app-specific adjustments to how data writes are handled: the optimised firmware not only makes the SSDs last longer but also keeps read/write performance much closer to the initial level, with a drop of less than 10%. Combined with the extended lifecycle, this makes app-optimised SSDs much more cost-efficient, as the rough comparison after this list illustrates.
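
To see why the endurance and performance figures above matter commercially, a back-of-envelope comparison helps. The sketch below uses the lifecycles and performance levels quoted in this article (three vs five years; roughly one third vs more than 90% of initial performance sustained); the drive price is a placeholder assumption.

```python
def cost_per_performance_year(price_eur: float, lifetime_years: float,
                              sustained_perf_fraction: float) -> float:
    """Price divided by performance-weighted service years (simplistic illustration)."""
    return price_eur / (lifetime_years * sustained_perf_fraction)

PRICE_EUR = 500.0  # hypothetical price of a data-centre SSD

standard = cost_per_performance_year(PRICE_EUR, lifetime_years=3, sustained_perf_fraction=0.33)
optimised = cost_per_performance_year(PRICE_EUR, lifetime_years=5, sustained_perf_fraction=0.90)

print(f"standard SSD : ~{standard:.0f} EUR per performance-year")
print(f"optimised SSD: ~{optimised:.0f} EUR per performance-year")
```

On these assumptions the application-optimised drive works out several times cheaper per performance-year; the exact ratio will of course depend on real prices, workloads, and replacement policies.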

Gearing up for the digital race

In their highly competitive market, data centre operators need to adapt their storage equipment to the application load as closely as possible. When buying a car, it is easy to see that you need to pick the one that suits your individual driving style and the road conditions you are likely to face – the asphalt of highways and city streets, or the rocks and gravel of the wilderness. In data centre storage, the need to adjust the equipment to the task at hand is less obvious, but even more important, as the user experience of millions of customers might depend on it. Optimising the SSD firmware for individual applications gives data centre operators pole position within their respective market segments – while faster, more durable, and more reliable solid-state drives allow them to leave the competition in the rearview mirror.
