High Performance Computational Astrophysics on GPUs with Applications

Speaker: Sethupathy Subramanian (Univ. of Notre Dame, USA)
When: Sep 01, 2023, 11:00 AM to 12:00 PM
Where: Ground Floor Auditorium (006) (Hybrid)

Abstract: GPU computing will be an integral part of any exascale supercomputer in the foreseeable future, so it is important to explore how higher-order Godunov (HOG) schemes can be ported to such advanced supercomputer architectures. In this talk, a set of techniques will be identified that enable the efficient and rapid porting of CPU code to GPUs, specifically for HOG schemes used to solve hyperbolic Partial Differential Equations (PDEs). The focus of this study is on Computational Fluid Dynamics (CFD), Magnetohydrodynamics (MHD), and Computational Electromagnetics (CED), illustrating how HOG schemes can be effectively implemented on supercomputers equipped with GPUs. To express parallelism and tune its mapping to the hardware, the OpenACC directive-based language extension is used.
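
As a rough illustration of this directive-based porting style (a minimal sketch only, not code from the speaker's implementation), the C fragment below offloads a hypothetical 1-D finite-volume update loop to the GPU with a single OpenACC directive; the array and function names are illustrative assumptions.

/* Hypothetical 1-D finite-volume update. With OpenACC the same loop runs on the
   CPU when compiled normally and is offloaded to the GPU when compiled for an
   accelerator target (e.g. "nvc -acc"). Names here are purely illustrative. */
void update_cells(int n, double dt_dx, const double *u, const double *f, double *u_new)
{
    #pragma acc parallel loop copyin(u[0:n], f[0:n+1]) copyout(u_new[0:n])
    for (int i = 0; i < n; ++i) {
        u_new[i] = u[i] - dt_dx * (f[i + 1] - f[i]);
    }
}

Because a compiler without OpenACC support simply ignores the pragma, the same source serves both CPU and GPU builds, which is what makes this incremental porting path attractive.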

The importance of algorithmic choices that work efficiently within GPU limitations is highlighted, including WENO schemes for spatial reconstruction and ADER predictor steps for the space-time representation of PDEs. Implementation examples are presented in both Fortran and C/C++ using OpenACC extensions, serving as a practical reference for similar ports. Challenges related to small GPU caches and to optimizing data movement between the host (CPU) and the device (GPU) are addressed. The performance of ADER predictor steps and Runge-Kutta time-stepping for CFD and MHD on GPUs is evaluated, showcasing the benefits and limitations of each approach.
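
One common way to limit the host-device data movement mentioned above is an OpenACC structured data region that keeps the solution arrays resident on the GPU across all stages of a time step. The sketch below assumes placeholder routines (reconstruct_weno, predictor_ader, update_solution) that internally launch their own "acc parallel loop" kernels with "present" clauses; it is not drawn from the codes discussed in the talk.

void reconstruct_weno(int n, const double *u, double *w);
void predictor_ader(int n, const double *w, double *pred);
void update_solution(int n, const double *pred, double *u);

/* Keep u, w and pred on the device for the whole run; u is transferred back
   to the host only when the data region ends. */
void evolve(int n, int nsteps, double *u, double *w, double *pred)
{
    #pragma acc data copy(u[0:n]) create(w[0:n], pred[0:n])
    {
        for (int step = 0; step < nsteps; ++step) {
            reconstruct_weno(n, u, w);     /* kernels operate on resident arrays */
            predictor_ader(n, w, pred);
            update_solution(n, pred, u);
        }
    } /* <-- u copied back to the CPU only here */
}

The design point is that the cost of the initial copy is amortized over many time steps, so the GPU is never left waiting on PCIe or NVLink transfers inside the main loop.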

These GPU advancements are integrated into the Riemann-Geomesh MHD code, which employs cutting-edge algorithms for simulating spherical astrophysical systems. It utilizes one-sided MPI-3 communication across processors and leverages OpenACC for GPU parallelization. These enhancements enable resource-intensive simulations of magnetically channeled winds from massive stars to be executed and analyzed. The simulations employ high-resolution meshes and strong magnetic fields, necessitating small time steps and long run times on supercomputers. The highly vectorized and GPU-enabled Riemann-Geomesh MHD code achieves approximately 5 times faster execution on GPUs than on CPUs, with near-perfect scalability up to ~100 GPUs.
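
For the one-sided MPI-3 communication mentioned above, the general pattern is to expose each rank's ghost-cell buffer as an RMA window and let neighbors deposit boundary data with MPI_Put. The standalone sketch below uses fence synchronization and a periodic 1-D neighbor layout purely for illustration; it does not reproduce Riemann-Geomesh's actual communication layer, and the buffer sizes are made up.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nghost = 4;                                  /* illustrative ghost-zone width */
    double *ghost    = malloc(nghost * sizeof(double));    /* written into by a neighbor    */
    double *boundary = malloc(nghost * sizeof(double));    /* our outgoing boundary cells   */
    for (int i = 0; i < nghost; ++i) boundary[i] = (double)rank;

    /* Expose the ghost buffer as an RMA window so other ranks can write into it. */
    MPI_Win win;
    MPI_Win_create(ghost, nghost * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int right = (rank + 1) % size;                         /* periodic neighbor, for demo   */

    MPI_Win_fence(0, win);
    /* One-sided put: deposit our boundary cells into the right neighbor's ghost
       buffer; the target rank posts no matching receive. */
    MPI_Put(boundary, nghost, MPI_DOUBLE, right, 0, nghost, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    free(ghost);
    free(boundary);
    MPI_Finalize();
    return 0;
}

On CUDA-aware MPI installations, an OpenACC "host_data use_device" region can hand device pointers directly to such RMA calls, avoiding a staging copy through host memory; whether the production code does this is not stated in the abstract.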

