Artificial Intelligence (AI) and Machine Learning (ML) applications are being developed for the enterprise and consumer markets at an exponential rate, but few developers are aware that persistent memory can play a critical role in optimising access to large data sets.
AI and ML workloads place highly demanding IO (input/output) and computational loads on GPU-accelerated Extract, Transform, Load (ETL) pipelines. The key challenge developers must overcome is reducing the overall time to discovery and insight in data-intensive applications. Because IO and computational performance are ultimately governed by bandwidth and latency, the high-performance data analytics that AI and ML applications require can be addressed by persistent memory solutions offering the highest bandwidth and lowest latency.
Non-Volatile Dual In-line Memory Modules (NVDIMMs) are an ideal fit for AI and ML storage servers. Data-intensive ETL and checkpointing workloads can use the persistent memory region within main memory (the NVDIMM) to operate at DRAM latencies (<100 ns) and DRAM bandwidth (25.6 GB/s), increasing efficiency and eliminating performance bottlenecks within AI and ML applications.
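To make the checkpointing idea concrete, here is a minimal, hedged sketch of how an application might checkpoint state through a memory-mapped persistent-memory file. On a real NVDIMM system the file would live on a DAX-mounted persistent-memory filesystem (the path `/mnt/pmem/checkpoint.bin` below is an assumption, not from the original text); an ordinary temporary file stands in here so the sketch is runnable anywhere. The field names `step` and `epoch` are illustrative only.

```python
# Sketch: checkpointing through a memory-mapped file. On persistent
# memory (e.g. a DAX mount such as /mnt/pmem/checkpoint.bin -- path is
# hypothetical), stores become plain memory writes that bypass the
# block-device IO path; a temp file stands in for portability.
import mmap
import os
import struct
import tempfile

CKPT_SIZE = 4096  # one page of checkpoint state (illustrative size)

path = os.path.join(tempfile.mkdtemp(), "checkpoint.bin")

# Create and size the backing file.
with open(path, "wb") as f:
    f.truncate(CKPT_SIZE)

# Map it into the address space so updates are ordinary stores.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), CKPT_SIZE)
    # Write a tiny "checkpoint": an iteration counter and an epoch.
    mm[:12] = struct.pack("<QI", 42, 7)  # step=42, epoch=7
    # On true persistent memory, a cache-line flush suffices for
    # durability; mm.flush() is the portable equivalent here.
    mm.flush()
    mm.close()

# Reload the checkpoint to verify the state survived.
with open(path, "rb") as f:
    step, epoch = struct.unpack("<QI", f.read(12))
print(step, epoch)
```

The design point the sketch illustrates is that checkpoint writes become memory operations rather than storage IO, which is what lets NVDIMM-backed checkpointing run at DRAM-class latency and bandwidth.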
SC19 will be held from November 18 to 21, 2019.