Summit scores well in HPL-AI, a new test that complements the classic High-Performance Linpack

Today at ISC High Performance 2021, a European virtual conference for high-performance computing (HPC), the Oak Ridge Leadership Computing Facility’s (OLCF’s) Summit was ranked as the world’s second-fastest supercomputer in the 57th TOP500 list. But it also took second place in a relatively new benchmark test apart from the main competition: High-Performance Linpack–Accelerator Introspection (HPL-AI).

With its submitted speed of 1.15 exaflops, or a billion billion (10^18) floating point operations per second, has Summit somehow jumped into the exascale era of supercomputing ahead of the OLCF’s upcoming Frontier system? No. When operational in 2022, Frontier is expected to deliver more than 1.5 exaflops of double-precision performance. Summit’s HPL-AI performance, on the other hand, scores its mixed-precision compute capabilities. These two methods of calculating arithmetic are used for different applications in computational science, and double precision is considered the ultimate standard.

“Many modern simulations require double precision to ensure that physical quantities are computed accurately, especially when those quantities are pushing and pulling at once, for example, the forces acting on atoms in molecules or the fight between nuclear fusion and gravity that happens in a star. So, double-precision performance is the key to determining how useful a supercomputer is for science,” said Bronson Messer, the OLCF’s director of science. “But not all of the operations in the codes need to be carried out at this level of precision all the time. Modern GPUs offer very high performance for lower precision, and taking advantage of this fact is a real benefit to many applications.”

The main rankings of the biannual TOP500 list use the High Performance Linpack (HPL) test, the industry standard for measuring double-precision (64-bit) arithmetic performance by traditional CPU supercomputers. First introduced in 1979 by Jack Dongarra, director of the Innovative Computing Laboratory (ICL) at the University of Tennessee, Knoxville, HPL has evolved over the decades along with supercomputer architectures and techniques.

“Historically, HPC workloads are benchmarked at double precision, representing the accuracy requirements in computational astrophysics, computational fluid dynamics, nuclear engineering, and quantum computing,” Dongarra said. “But within the past few years, hardware vendors have started designing special-purpose units for low-precision arithmetic in response to the machine learning community’s demand for high computing power in low-precision formats.”

At ISC 2019, the ICL team of Jack Dongarra, Piotr Luszczek, and Azzam Haidar (now a senior engineer at NVIDIA) proposed the first implementation of the HPL-AI benchmark and submitted the first entry for Summit, scoring 450 petaflops. Later that year, they released the HPL-AI reference implementation to address the growing trend of supercomputers that use mixed-precision (16- or 32-bit) arithmetic in data science.

Unlike high-fidelity simulations, data-science applications such as artificial intelligence or neural networks don’t always require the ultimate in 64-bit precision to accomplish their tasks effectively. Consequently, GPU makers have been adding the ability to conduct lower-precision calculations in their products, such as the NVIDIA V100 Tensor Core GPUs in Summit or the AMD Instinct™ GPUs coming in Frontier. This can result in a big speed increase for those data-driven applications.

“In general, when you do a simulation, you’re trying to represent the world, the locations of molecules or atoms or climate currents, in the most precise way that you can. So you want all 64 bits of double precision to represent a numeric value,” said Mallikarjun Shankar, head of the Advanced Technologies Section in the National Center for Computational Science at the US Department of Energy’s (DOE) Oak Ridge National Laboratory. “Now, in the world of data science, and for certain classes of operations, you’re often classifying or categorizing quantities or operating on a smaller set of quantities where you don’t need all 64 bits to represent the quantity.”
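The difference between these bit widths is easy to see directly. As a quick illustration (not part of any benchmark), storing the same constant in the three IEEE 754 formats discussed here shows how many correct digits each one keeps:

```python
import numpy as np

# The same constant stored at the three precisions the article discusses:
# 64-bit (double), 32-bit (single), and 16-bit (half).
pi64 = np.float64(np.pi)   # ~15-16 significant decimal digits
pi32 = np.float32(np.pi)   # ~7 significant decimal digits
pi16 = np.float16(np.pi)   # ~3 significant decimal digits

print(f"float64: {pi64:.10f}")   # 3.1415926536
print(f"float32: {pi32:.10f}")   # 3.1415927410
print(f"float16: {pi16:.10f}")   # 3.1406250000
```

Half precision is already wrong in the third decimal place, which may be harmless when classifying or categorizing data but is unacceptable when, say, accumulating forces on atoms over millions of simulation time steps.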
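How a mixed-precision benchmark can still report a meaningful answer is worth sketching. The HPL-AI reference implementation is described as doing the expensive factorization in low precision and then recovering full accuracy with iterative refinement; the sketch below shows that general idea with classical (non-GMRES) refinement on a small, well-conditioned system via SciPy. The matrix, sizes, and iteration count are illustrative assumptions, not the benchmark's actual configuration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
# A diagonally dominant (hence well-conditioned) test matrix.
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# The O(n^3) factorization happens once, in cheap single precision --
# this is the step that low-precision hardware units accelerate.
lu, piv = lu_factor(A.astype(np.float32))

# Initial single-precision solve, promoted to double.
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

# Refinement: compute the residual in double precision, correct the
# solution using the cheap low-precision factorization, repeat.
for _ in range(5):
    r = b - A @ x
    x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)

# The final residual reaches double-precision levels even though the
# factorization was done entirely in 32-bit arithmetic.
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Only the cheap residual-and-correct steps run in 64-bit arithmetic, so most of the flops stay in the fast low-precision units, which is why HPL-AI scores can be several times higher than HPL scores on the same machine.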