
NI LabVIEW Application Named Finalist in Supercomputing Analytics Challenge

3rd December 2008
ES Admin
National Instruments was recently announced as a finalist in the 2008 Supercomputing Conference Analytics Challenge for its accomplishments in high-performance computing (HPC) with the NI LabVIEW graphical system design platform. The award recognises the most innovative solutions to the most complex problems in supercomputing applications. For the competition, the National Instruments LabVIEW research and development team submitted a technical paper establishing multicore programming benchmarks for the real-time control of the forthcoming European Extremely Large Telescope (E-ELT), which presents computational challenges of historic scale.
“We are excited to be a finalist in this challenge because it recognises the parallel programming potential National Instruments has been developing since introducing LabVIEW more than 20 years ago,” said Dr. James Truchard, CEO, Cofounder and President of National Instruments. “In addition to acknowledging the impressive high-performance computing capabilities of LabVIEW and our work on the European Extremely Large Telescope, this honour positions National Instruments as a leader in real-time control applications. This achievement also complements the major solutions National Instruments has facilitated for the Max Planck Institute for Plasma Physics in the field of nuclear fusion and for CERN in particle acceleration, which represent two of the biggest technical challenges of our time.”

The Analytics Challenge was held in conjunction with SC08, the international conference on high-performance computing, networking, storage and analysis, Nov. 15-21 in Austin, Texas. Each year, the Analytics Challenge provides a forum for researchers and industry representatives to present solutions that embody all facets of high-performance computing, such as comprehensive computational approaches, large-data-set processing and innovative analysis and visualisation techniques.

For their Analytics Challenge submission, National Instruments engineers documented their breakthrough work with the European Southern Observatory (ESO) on the E-ELT project, which is currently in the proof-of-concept phase and, when constructed, will be the largest telescope ever built. ESO needed to prove the viability of a commercial off-the-shelf (COTS) solution for controlling the two most complex of the telescope’s five mirrors. The primary mirror will be 42 m in diameter and will comprise 984 hexagonal segments, all of which must remain strictly aligned at all times, even in windy conditions. To maintain segment alignment, the control system must read 6,000 sensor inputs, send control signals to 3,000 actuators, and complete this input-output cycle up to 1,000 times per second.
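
As a rough back-of-the-envelope check, those channel counts and the 1,000 Hz cycle rate imply the sustained data rate NI reports for the control network. A minimal sketch, assuming 4-byte single-precision values per channel (an assumption, not a figure stated in the article):

```python
# Back-of-the-envelope data-rate estimate for the E-ELT primary-mirror loop.
# Assumes 4 bytes (single-precision float) per channel value -- an
# illustrative assumption, not a figure from the article.
SENSORS = 6_000
ACTUATORS = 3_000
RATE_HZ = 1_000          # up to 1,000 input-output cycles per second
BYTES_PER_VALUE = 4

inbound = SENSORS * RATE_HZ * BYTES_PER_VALUE      # sensor data in
outbound = ACTUATORS * RATE_HZ * BYTES_PER_VALUE   # actuator commands out
total_mb_per_s = (inbound + outbound) / 1_000_000

print(f"{total_mb_per_s:.0f} MB/s")  # 36 MB/s
```

Under that assumption the sensor and actuator traffic together come to the 36 MB per second the article cites for the communication network.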

To solve this problem, NI engineers used the multicore programming functionality of LabVIEW Real-Time to create a highly deterministic, hardware-in-the-loop (HIL) communication network that moves 36 MB of data per second. The benchmarks achieved included distributing control algorithms across up to eight cores simultaneously and performing a 3,000-by-6,000 matrix-vector multiplication within 0.5 ms, meeting a monumental computational challenge while maintaining the determinism required in real-time applications and staying well under the 1 ms closed-loop threshold.
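
For a sense of the workload, each control cycle amounts to multiplying a 3,000-by-6,000 gain matrix by a 6,000-element sensor vector to produce 3,000 actuator commands. A minimal NumPy sketch of one such cycle (illustrative only: NI’s actual implementation is in LabVIEW Real-Time, and the matrix values here are random placeholders):

```python
import time

import numpy as np

# One control cycle of the benchmarked workload: a 3,000-by-6,000
# matrix-vector product. Values are random placeholders; this is an
# illustrative sketch, not NI's LabVIEW implementation.
rng = np.random.default_rng(0)
gain = rng.standard_normal((3_000, 6_000)).astype(np.float32)  # control matrix
sensors = rng.standard_normal(6_000).astype(np.float32)        # one sensor frame

start = time.perf_counter()
commands = gain @ sensors          # 3,000 actuator commands for this cycle
elapsed_ms = (time.perf_counter() - start) * 1_000

print(commands.shape, f"{elapsed_ms:.2f} ms")
```

To hit the 0.5 ms figure deterministically, cycle after cycle, is the hard part: a general-purpose OS can compute this quickly on average, but the real-time guarantee is what the LabVIEW Real-Time benchmarks demonstrated.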

The team also documented its work on the even larger problem of developing control for the telescope’s 2.5 m adaptive mirror, a thin, flexible mirror membrane spread across 8,000 actuators. Instead of maintaining alignment, this mirror will deform continuously to compensate for wavefront aberrations caused by atmospheric disturbances. The computational requirements for controlling this mirror are nearly 15 times greater than those of the large primary mirror. NI engineers determined that the problem could be solved only with a state-of-the-art multicore blade system, and they tested their solution on a Dell M1000, a 16-blade system in which each blade features eight cores. Although the solution is still in progress, results from the Dell system show that the LabVIEW solution has already distributed the control problem effectively across 128 cores, a groundbreaking achievement in itself.
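
One common way to spread such a control computation over many cores is block-row decomposition: each worker computes its share of the actuator commands from its own rows of the gain matrix, and the partial results are concatenated. A hypothetical Python sketch (not NI’s LabVIEW code; the worker count, sensor count, and matrix sizes are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Hypothetical block-row decomposition of an 8,000-actuator control product,
# mirroring the idea of spreading one control matrix over many cores
# (the E-ELT proof of concept used 16 blades x 8 cores = 128). Sizes and
# worker count here are invented for illustration.
N_WORKERS = 8
rng = np.random.default_rng(1)
gain = rng.standard_normal((8_000, 1_000)).astype(np.float32)  # control matrix
sensors = rng.standard_normal(1_000).astype(np.float32)        # one sensor frame

blocks = np.array_split(gain, N_WORKERS)           # contiguous row blocks

with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    partials = list(pool.map(lambda block: block @ sensors, blocks))

commands = np.concatenate(partials)                # reassembled actuator vector
assert np.allclose(commands, gain @ sensors)       # matches the serial result
print(commands.shape)
```

The decomposition is exact because each output element depends on only one row of the matrix, so the blocks can be computed independently with no communication until the final concatenation, which is what makes the problem scale well across blades and cores.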

“The leading-edge power of the Dell Precision workstation and PowerEdge servers together with the real-time and graphical programming capabilities of NI LabVIEW deliver impressive capabilities to efficiently distribute computing loads across all the nodes in HPC applications,” said Greg Weir, Senior Manager of Worldwide Business Development for Dell Precision Workstations. “The full memory and graphics potential of our workstations is realised with the key visualisation functions of LabVIEW that HPC applications require.”

Other parallel hardware that may add processing power to the final E-ELT solution for ESO includes field-programmable gate arrays (FPGAs), which LabVIEW already supports through the NI LabVIEW FPGA Module, and general-purpose graphics processing units (GPGPUs), which are being investigated as a viable acceleration platform. In addition to the Dell proof of concept, a prototype in which LabVIEW offloads computation to NVIDIA’s CUDA technology has been thoroughly benchmarked, with impressive computational results.
