Cut data centre costs with a new power supply architecture
Modern power supply architectures can cut percentage points from data centre electricity bills, writes Bob Cantrell, senior application engineer, Ericsson Power Modules
Today’s growing data centres consume around three per cent of global electricity supply, which equates to two per cent of the world’s greenhouse gas emissions. This carbon footprint is roughly the same as the entire airline industry.
The amount of energy data centres use is doubling every four years, and with cloud data use continuing to grow dramatically, this trend shows no signs of slowing.
In the US alone, data centre electricity use reached 70 billion kWh in 2014. Given the amounts involved, even small percentage reductions can save millions of dollars.
Data centres consume energy in two ways. Firstly, servers, networking equipment and mass storage devices need energy to work. Secondly, massive air conditioning systems are required to keep this equipment cool, consuming large amounts of energy. Strategies to reduce the amount of energy that data centres consume focus on increasing the efficiency of data centre equipment, reducing the requirement for cooling or, ideally, both.
Figure 1: An overview of a typical power system architecture in a data centre application
Modern strategies for reducing data centres’ reliance on cooling tackle the problem in several ways. One option is to raise the maximum ambient temperature at which equipment can operate, so that servers can safely and reliably run in a hotter environment. Intel and others have raised the ambient temperatures of their data centres above the traditional 68 to 72°F (20 to 22°C) range without significantly affecting equipment reliability. For every degree Fahrenheit the temperature can be raised, cooling energy costs fall by around four per cent.
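Taken as a rule of thumb, the four-per-cent figure compounds across each degree of increase. A minimal sketch in Python, using a hypothetical annual cooling bill to show the effect:

```python
# Rough model of the rule of thumb above: each 1°F rise in the ambient
# set point cuts cooling energy by about four per cent. The 4% figure is
# from the text; the baseline cost is an illustrative assumption.

def cooling_cost(baseline_cost, degrees_raised, saving_per_degree=0.04):
    """Estimate the annual cooling cost after raising the set point."""
    return baseline_cost * (1 - saving_per_degree) ** degrees_raised

baseline = 1_000_000  # hypothetical $1M annual cooling bill
for delta in (0, 5, 10):
    print(f"+{delta}°F: ${cooling_cost(baseline, delta):,.0f}")
```

Raising the set point by ten degrees under this simple model cuts the hypothetical bill by roughly a third, which is why operators pursue even modest temperature increases.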
Another way of reducing cooling costs is to move data centres to colder parts of the world. Scandinavian countries such as Norway and Sweden offer a cool climate year-round. Cool outside air, available for free, can be drawn into the data centre rather than mechanically chilling warmer air. Countries like these, which depend heavily on hydroelectric and other renewable energy sources, also offer low-carbon electricity supplies. Sweden has recently reduced its tax rate on energy supplied to data centres. Facebook has a data centre in Luleå, Sweden and one in Odense, Denmark. Other companies are following suit.
Increasing hardware use
Data centres have also cut their electricity bills by switching off the lights in their facilities unless there is a reason to have them on, for example when engineers are accessing the hardware.
Figure 2: Converting directly from 48V to the point of load saves energy and board space
Improving server efficiency is another significant way operators have tried to reduce power consumption, using techniques such as server virtualisation. Virtualisation uses software to divide server hardware into multiple virtual environments, so that a number of virtual servers can be run concurrently on the same physical hardware. This is much more efficient, minimising the amount of server hardware that needs to be running at any one time, which helps reduce power consumption.
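The consolidation gain from virtualisation can be sketched with some simple arithmetic. All of the figures below — workload count, average utilisation and the per-host utilisation target — are illustrative assumptions, not data from the article:

```python
import math

# Illustrative consolidation arithmetic for virtualisation: many lightly
# loaded physical servers can share far fewer virtualised hosts.

workloads = 200      # lightly loaded physical servers (assumed)
avg_util = 0.10      # average CPU utilisation per workload (assumed)
target_util = 0.70   # safe utilisation target per virtualised host (assumed)

hosts_needed = math.ceil(workloads * avg_util / target_util)
print(f"{workloads} workloads consolidate onto {hosts_needed} hosts")
```

Under these assumptions, 200 dedicated machines collapse to 29 hosts, which is the mechanism by which virtualisation minimises the hardware that must be powered and cooled at any one time.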
New power architectures
The technology used in power supply infrastructure is also critical to the energy efficiency of the modern data centre. Previous generations of data centres had a power room where the incoming AC line voltage was rectified to DC for distribution to the cabinets. In a typical set-up today, the AC supply charges large batteries in the data centre’s uninterruptible power supply (UPS) system. These UPS systems can output 380V DC, or even high-voltage AC, for the most efficient distribution. AC/DC converters rectify the power, then DC/DC bus converter modules step the high voltage down to 48V DC, which is distributed to other parts of the system along an isolated power bus called the intermediate bus.
Intermediate bus converters step down 48V from the bus to 12V DC, which is stepped down by individual PoL converters to the exact voltage level required by the load. Individual PoL converters are needed because processors, ASICs, FPGAs or other devices in these loads have different power supply requirements, including different supply voltages and regulation requirements. The efficiency of this entire network from the AC line in to the load might be around 80 per cent. While the individual conversion stages are efficient, each stage effectively takes a small cut of the power available and dissipates it as heat. Making these stages more efficient can reduce wasted energy, thereby both lowering electricity usage and reducing the amount of cooling required, saving further energy.
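Because the stages are cascaded, the end-to-end figure is the product of the per-stage efficiencies. A short Python sketch, using representative per-stage values (assumptions, not measured figures) that reproduce the roughly 80 per cent quoted above:

```python
# End-to-end efficiency of cascaded conversion stages is the product of
# the individual stage efficiencies. Per-stage values are representative
# assumptions for illustration.

def chain_efficiency(*stages):
    """Overall efficiency of a chain of stages, each given as a fraction."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# AC/DC rectifier -> DC/DC bus converter -> 48V-to-12V intermediate bus -> PoL
overall = chain_efficiency(0.95, 0.97, 0.96, 0.90)
print(f"End-to-end efficiency: {overall:.1%}")  # about 80 per cent
```

The product form makes the article’s point explicit: every stage takes a multiplicative cut, so improving any single stage raises the whole chain.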
Many data centres have switched from older analogue technologies to digital power modules, where the characteristics are determined by firmware. A typical digital power architecture uses a central controller to monitor and adjust the settings of individual digital power modules in the system, maximising efficiency by responding to changes in load and line conditions as they happen. This centralised control also scales well to the multiple power rails that modern data centre boards often require. Another benefit is the digital modules’ smaller physical footprints, due to having fewer bulky capacitors.
Software-defined power
The latest step in the evolution of digital power is software-defined power. This has the potential to increase power systems’ efficiency even further, thereby reducing cooling requirements. In this architecture, software on the controller responds to the changing behaviour of complex loads, enabling advanced control functions such as adaptive voltage scaling (a scheme that advanced processors use to save power when they are not fully utilised). The software can also respond to changes in component values due to ageing. Software can increase efficiency further, tighten voltage regulation where required and improve transient response.
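The adaptive voltage scaling idea can be sketched conceptually. The snippet below is not a real controller API; the voltages, threshold and telemetry interface are all assumptions made for illustration:

```python
# Conceptual sketch of adaptive voltage scaling: supervisory software
# lowers a core rail's set point when the processor is lightly loaded
# and restores it under heavy load. All values here are hypothetical.

NOMINAL_V = 1.00   # full-performance core voltage (assumed)
IDLE_V = 0.85      # reduced voltage at light load (assumed)

def select_rail_voltage(utilisation):
    """Pick a core-rail set point from measured utilisation (0..1)."""
    return NOMINAL_V if utilisation > 0.5 else IDLE_V

for util in (0.1, 0.4, 0.9):
    print(f"utilisation {util:.0%} -> set point {select_rail_voltage(util):.2f} V")
```

Since dynamic power in CMOS logic scales with the square of supply voltage, even a modest reduction at light load yields a worthwhile saving, which is why control loops like this are attractive.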
Another innovation set to revolutionise data centre power architectures is direct conversion. This promises a step change in efficiency for power systems, resulting in less wasted energy and even less of a requirement for cooling. Instead of an intermediate bus converter combined with a PoL converter, a new type of power module can convert 48V directly to PoL voltages as low as 1V in a single power stage. Previously, the 48V to 12V conversion was around 96 per cent efficient, while the PoL conversion was around 90 per cent efficient; the combination of the two stages was therefore around 86 per cent efficient. If the next generation of power conversion technology could deliver a single direct conversion stage as efficient as today’s PoL converters, this would improve the efficiency by around four percentage points.
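The two-stage versus direct-conversion comparison in the text reduces to a one-line calculation (the assumed single-stage efficiency simply matches today’s PoL figure):

```python
# Two cascaded stages (48V->12V at ~96%, then 12V->PoL at ~90%) versus a
# single direct 48V->PoL stage assumed to match today's PoL efficiency.

two_stage = 0.96 * 0.90   # combined efficiency of the two stages
direct = 0.90             # assumed efficiency of one direct stage
print(f"Two-stage: {two_stage:.1%}, direct: {direct:.1%}, "
      f"gain: {(direct - two_stage) * 100:.1f} points")
```

The product of the two stages comes out at 86.4 per cent, so eliminating one stage recovers a little under four percentage points, consistent with the figure quoted above.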
An additional saving can be achieved by distributing power around the server at 48V rather than 12V. Resistive loss is proportional to the square of the current, so reducing the current four-fold means the copper bus bars and cables that carry power around the system dissipate one sixteenth of the energy they would delivering the same power at the lower voltage.
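The I²R argument can be checked numerically. The load power and path resistance below are illustrative assumptions; only the 12V-to-48V comparison comes from the text:

```python
# For the same delivered power, quadrupling the bus voltage quarters the
# current, so resistive loss in the same copper (P_loss = I^2 * R) drops
# by a factor of sixteen. Load power and resistance are assumed values.

def bus_loss(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a distribution path at a given bus voltage."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

P, R = 600.0, 0.002   # hypothetical 600 W load, 2 mOhm copper path
loss_12 = bus_loss(P, 12.0, R)
loss_48 = bus_loss(P, 48.0, R)
print(f"12V loss: {loss_12:.2f} W, 48V loss: {loss_48:.4f} W, "
      f"ratio: {loss_12 / loss_48:.0f}x")
```

With these numbers the 12V path dissipates 5 W against about 0.31 W at 48V — exactly the sixteen-fold reduction the text describes.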
Direct conversion is specified in the new server rack standard from the Open Compute Project, backed by Facebook and Google, which reports significant energy and cost savings compared with the intermediate bus architecture.
Main picture: Wind turbines produce renewable electricity in cold climates (Image courtesy of Business Sweden/The Node Pole)