Heat: scaling the Python scientific stack to HPC systems

Claudia Comito, Thomas Saupe

Track: PyData & Scientific Libraries Stack
Python skill level: Intermediate
Domain expertise: Intermediate

Memory bottleneck in scientific computing (4 minutes)

  • Limitations of single-node libraries: NumPy and the rest of the scientific Python stack assume the whole dataset fits into one machine's memory.
  • Complexity of existing workarounds: trade-offs between manual MPI programming (high developer effort) and task-parallel frameworks (scheduling and communication overhead).
  • The data-parallel alternative: performing uniform operations on distributed slices of a global tensor.
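
A minimal sketch of this model with Heat (assuming Heat is installed and the script is launched through MPI, e.g. mpirun -n 4 python demo.py):

    import heat as ht

    # Global 1-D array, row-partitioned: each MPI process stores one contiguous slice.
    x = ht.arange(100_000_000, dtype=ht.float32, split=0)

    # Elementwise operations run independently on each process's local slice ...
    y = ht.sin(x) ** 2

    # ... while reductions communicate across processes and return a global result.
    print(y.sum())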

Architecture and implementation (8 minutes)

  • The DNDarray structure: Technical breakdown of the distributed n-dimensional array, which provides a global logical view while managing local physical storage across MPI ranks.
  • The split axis concept: How data is partitioned along specific dimensions (e.g., rows or columns) to optimize communication for different mathematical operations (illustrated in the sketch after this list).
  • Backend synergy:
    • PyTorch as the compute engine for high-performance local tensor operations and GPU acceleration.
    • mpi4py for communication in cluster environments.
  • Hardware interoperability: Transparent execution across CPUs and GPUs, including NVIDIA (CUDA) and AMD (ROCm) accelerators.
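
A minimal sketch of these mechanics (assuming a working Heat installation; the last line additionally assumes a CUDA- or ROCm-enabled PyTorch build):

    import heat as ht

    # A 10000 x 10000 array split along axis 0: each MPI rank holds a block of rows.
    a = ht.random.randn(10_000, 10_000, split=0)

    print(a.shape)         # global logical shape, identical on every rank
    print(a.larray.shape)  # this rank's local PyTorch tensor
    print(a.split)         # the split axis, here 0

    # Repartitioning along another axis triggers collective communication.
    a.resplit_(1)          # now column-partitioned

    # Local tensors can live on accelerators instead of the CPU.
    b = ht.zeros((1_000, 1_000), split=0, device="gpu")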

Algorithmic building blocks for distributed memory (8 minutes)

  • Communication-aware linear algebra: Distributed matrix-matrix multiplication and its communication costs. Advanced matrix decomposition methods, such as hierarchical SVD (hSVD) and randomized SVD, for massive datasets (see the code sketch after this list).
  • Scalable machine learning and statistics: K-Means clustering and Principal Component Analysis (PCA) on distributed arrays.
  • Temporal analysis using Dynamic Mode Decomposition (DMD) on large-scale scientific data, such as global wind-speed fields.
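
A rough sketch of how these building blocks look in user code (hsvd_rank and KMeans are part of Heat's public API, though exact signatures and return values may differ between releases):

    import heat as ht

    # Tall-skinny data matrix, row-partitioned across processes.
    X = ht.random.randn(1_000_000, 64, split=0)

    # Distributed matrix-matrix multiplication: the 64 x 64 Gram matrix
    # requires a global reduction over the distributed row blocks.
    gram = X.T @ X

    # Hierarchical SVD: a rank-10 truncated factorization assembled by
    # merging local SVDs up a process tree.
    hsvd_out = ht.linalg.hsvd_rank(X, 10)

    # K-Means clustering directly on the distributed rows.
    kmeans = ht.cluster.KMeans(n_clusters=4)
    kmeans.fit(X)
    labels = kmeans.predict(X)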

Performance and scaling efficiency (7 minutes)

  • Scaling methodologies: strong scaling (speedup for a fixed problem size) and weak scaling (efficiency as both problem size and resources grow).
  • Breaking the memory wall: Utilizing the cumulative RAM of many cluster nodes to process datasets that are impossible to load on a single machine.
  • Case studies: Reviewing performance results from large-scale runs.
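
For reference, the standard definitions behind these measurements, with T(p) denoting the runtime on p processes:

    strong-scaling speedup:   S(p) = T(1) / T(p)   (fixed problem size; ideal: S(p) = p)
    weak-scaling efficiency:  E(p) = T(1) / T(p)   (problem size grown with p; ideal: E(p) = 1)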

Summary and project roadmap (3 minutes)

  • Key takeaways
  • Upcoming features
  • Open-source community

Claudia Comito

I work in the Large-Scale Data Science division at the Jülich Supercomputing Centre (JSC), and I lead the development of Heat, an open-source distributed tensor framework designed for high-performance data analytics. My work focuses on scaling scientific Python applications across multi-node, multi-GPU clusters.

My background is in astrophysics; I joined JSC in 2018 to co-design distributed analytics for scientific domains including aerospace and Earth system modeling. Since 2021, I have led the Heat project, focusing on technical user support, community growth, and project dissemination.

Thomas Saupe