
Supermicro's DCBBS simplifies and shortens buildouts of AI/IT liquid-cooled data centres

17th May 2025
Sheryl Miles

Supermicro is announcing Data Centre Building Block Solutions (DCBBS) to overcome the immense complexity of outfitting liquid-cooled AI factories with all critical infrastructure components, including servers, storage, networking, racks, liquid cooling, software, services, and support. As an expansion of Supermicro's System Building Block Solutions, DCBBS adopts a standardised yet flexible solution architecture, vastly expanded in scope to handle the most demanding AI data centre training and inference workloads, enabling easier data centre planning, buildout, and operation – all while reducing cost.

"Supermicro's DCBBS enables clients to easily construct data centre infrastructure with the fastest time-to-market and time-to-online advantage, deploying as quickly as three months," said Charles Liang, President and CEO of Supermicro. "With our total solution coverage, including designing data centre layouts and network topologies, power, and battery backup-units, DCBBS simplifies and accelerates AI data centre buildouts leading to reduced costs and improved quality."

DCBBS offers packages of pre-validated, data centre-level scalable units, including a 256-node AI Factory DCBBS scalable unit, designed to alleviate the burden of prolonged data centre design by providing a streamlined package of floor plans, rack elevations, bills of materials, and more. Supermicro provides comprehensive first-party services to ensure project success, from initial consultation through on-site deployment and continued on-site support. DCBBS is customisable at the system level, rack cluster level, and data centre level to meet virtually any project requirement.

Combined with Supermicro's DLC-2 technology, DCBBS also helps customers save up to 40% in power, reduce data centre footprint by 60%, and cut water consumption by 40%, all of which leads to 20% lower TCO.
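To put those headline figures in context, the short Python sketch below applies the claimed savings to an entirely hypothetical annual cost baseline; the baseline numbers are invented for illustration and are not Supermicro data.

```python
# Purely illustrative arithmetic: hypothetical baseline costs, not Supermicro data.
# Applies the headline DLC-2 savings (up to 40% power, 60% footprint, 40% water)
# to a made-up annual cost baseline to show how they could feed into TCO.

baseline = {
    "power": 10_000_000,      # USD/year, hypothetical electricity cost
    "facility": 4_000_000,    # USD/year, hypothetical floor-space cost
    "water": 500_000,         # USD/year, hypothetical water cost
    "other": 18_000_000,      # USD/year, hypothetical hardware, staff, etc.
}
savings = {"power": 0.40, "facility": 0.60, "water": 0.40, "other": 0.0}

before = sum(baseline.values())
after = sum(cost * (1 - savings[item]) for item, cost in baseline.items())
print(f"Hypothetical TCO reduction: {1 - after / before:.0%}")  # ~20% on these made-up numbers
```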

AI factory data centre-level scalable unit

The need for AI infrastructure continues to scale: AI training requires clusters of thousands of GPUs to develop foundation models. AI inference applications are also leveraging more test-time compute by running multiple inference passes with a mixture of models to deliver greater intelligence. Supermicro's AI Factory DCBBS package fully equips data centres to tackle these rising AI computational requirements.

Solutions from Supermicro include up to 256 liquid-cooled 4U Supermicro NVIDIA HGX system nodes, each equipped with eight NVIDIA Blackwell GPUs (2,048 GPUs in total), interconnected with up to 800Gb/s NVIDIA Quantum-X800 InfiniBand or the NVIDIA Spectrum-X Ethernet networking platform. The compute fabric is supported by elastically scalable tiered storage with high-performance PCIe Gen5 NVMe, TCO-optimised data lake nodes, and resilient management nodes for continuous operation.
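For a sense of scale, the following sketch simply multiplies out the figures quoted above; it is arithmetic on the article's numbers, not a Supermicro sizing tool.

```python
# Back-of-the-envelope sizing of the 256-node AI Factory scalable unit.
# Node, GPU, and link figures follow the article; this is only arithmetic.

NODES = 256          # liquid-cooled 4U Supermicro NVIDIA HGX systems
GPUS_PER_NODE = 8    # NVIDIA Blackwell GPUs per node
LINK_GBPS = 800      # NVIDIA Quantum-X800 InfiniBand / Spectrum-X Ethernet

total_gpus = NODES * GPUS_PER_NODE
print(f"GPUs per scalable unit: {total_gpus:,}")       # 2,048
print(f"Per-link fabric bandwidth: {LINK_GBPS} Gb/s")
```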

System-level, rack-level, and data centre-level customisation

Supermicro features a modular building block approach composed of three hierarchical levels: system level, rack level, and data centre level, giving customers unparalleled design options, from determining a system-level bill of materials down to selecting individual components, including CPUs, GPUs, DIMMs, drives, and NICs. System-level customisation ensures the ability to meet specialised hardware requirements for particular data centre workloads and applications, and allows for granular fine-tuning of data centre resources.
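As a minimal sketch of what a system-level bill of materials might look like expressed as data, the example below uses a hypothetical SystemBOM structure; the class, field names, and example part choices are assumptions for illustration only, not a Supermicro configurator or API.

```python
# Hypothetical sketch only: SystemBOM and its fields are illustrative, not a
# Supermicro API. It shows the kind of per-component choices (CPUs, GPUs,
# DIMMs, drives, NICs) that system-level customisation covers.

from dataclasses import dataclass, field

@dataclass
class SystemBOM:
    cpu: str
    cpu_count: int
    gpu: str
    gpu_count: int
    dimms: list[str] = field(default_factory=list)
    drives: list[str] = field(default_factory=list)
    nics: list[str] = field(default_factory=list)

node = SystemBOM(
    cpu="x86 server CPU (example)", cpu_count=2,
    gpu="NVIDIA Blackwell GPU", gpu_count=8,
    dimms=["96GB DDR5"] * 24,                 # example DIMM population
    drives=["7.68TB PCIe Gen5 NVMe"] * 8,     # example drive population
    nics=["800Gb/s fabric NIC"] * 8,          # example NIC population
)
print(f"{node.gpu_count} GPUs, {len(node.dimms)} DIMMs, {len(node.drives)} drives")
```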

Supermicro also aids in designing rack elevation layouts optimised for thermals and cabling, giving customers the ability to select the rack enclosure type, including 42U, 48U, and 52U configurations.
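The sketch below shows the kind of rack-elevation arithmetic involved, assuming 4U compute nodes and a hypothetical 8U reserved for a CDU, switches, and management; the reserved figure is an assumption, not a Supermicro elevation.

```python
# Illustrative rack-fit arithmetic for 4U nodes in the enclosure sizes named
# above. The 8U reserved for a CDU, switches, and management is an assumption
# for illustration, not a Supermicro rack elevation.

NODE_HEIGHT_U = 4
RESERVED_U = 8   # assumed: in-rack CDU + fabric switches + management node

for rack_u in (42, 48, 52):
    nodes = (rack_u - RESERVED_U) // NODE_HEIGHT_U
    print(f"{rack_u}U rack: up to {nodes} x 4U nodes ({RESERVED_U}U reserved)")
```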

After the initial consultation with the customer, Supermicro delivers a project proposal tailored to a given data centre power budget, performance target, or other requirements.

Supermicro DLC-2

With liquid-cooled data centres growing from less than 1% of the market to an expected 30% within a year, Supermicro is driving industry-wide adoption of DLC by helping customers meet the challenge of building new liquid-cooled data centres that remove heat more efficiently.

DLC provides unmatched efficiency by capturing heat directly from the individual chips, including AI GPUs running at 1,000W TDP and beyond. Liquid cooling infrastructure is planned and deployed at data centre scale, including the piping and facility-side cooling towers for heat dissipation. Supermicro leads the industry in providing a total solution for direct-to-chip liquid cooling infrastructure, consisting of DLC systems, in-rack or in-row coolant distribution units, coolant distribution manifolds, cooling towers, and more.
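A rough sense of why direct-to-chip cooling copes with 1,000W-class devices comes from the basic heat-transfer relation Q = ṁ·c_p·ΔT; the sketch below assumes a water-like coolant and a 10 K temperature rise across the cold plate, both illustrative assumptions rather than DLC-2 specifications.

```python
# Rough heat-transfer sketch: coolant flow needed to absorb a given heat load,
# from Q = m_dot * c_p * delta_T. Water-like coolant and a 10 K temperature
# rise are assumptions for illustration, not DLC-2 specifications.

CP_WATER = 4186.0   # J/(kg*K), specific heat of water
DELTA_T = 10.0      # K, assumed coolant temperature rise across the cold plate

def flow_litres_per_min(heat_w: float) -> float:
    """Approximate coolant flow (L/min) to carry away heat_w watts."""
    kg_per_s = heat_w / (CP_WATER * DELTA_T)
    return kg_per_s * 60.0   # ~1 litre per kg for water

print(f"Single 1,000W GPU: {flow_litres_per_min(1_000):.1f} L/min")
print(f"8-GPU HGX node:    {flow_litres_per_min(8_000):.1f} L/min")
```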
