
Chameleon Cloud HPC Appliances

This repository contains the latest version of the material published in the Trovi artifact titled "MPI and Spack Based HPC Cluster".

The Trovi artifact provides a reproducible, ready-to-use HPC cluster on Chameleon Cloud, with automated node setup so MPI-based applications can be run immediately.

Introduction

Message Passing Interface (MPI) is the backbone of high-performance computing (HPC), enabling efficient scaling across thousands of processing cores.

This project provides a reproducible MPI setup on the Chameleon testbed. It deploys an MPI cluster across a configurable number of nodes. Users log in to a "main" node; from there, MPI applications can be built and run across the entire cluster, as sketched below.
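As a quick smoke test, the following sketch compiles and launches a minimal MPI "hello world" from the main node. It assumes an MPICH-style mpiexec on the PATH and a hostfile named hosts listing the cluster's nodes; both are hypothetical placeholders, so adapt them to the launcher and node names in your deployment.

```sh
# Minimal sketch: build and launch an MPI hello-world from the main node.
# Assumes mpicc/mpiexec are on PATH and "hosts" lists one node name per line.
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(node, &len);     /* which node we landed on */
    printf("Hello from rank %d of %d on %s\n", rank, size, node);
    MPI_Finalize();
    return 0;
}
EOF
mpicc hello_mpi.c -o hello_mpi
mpiexec -n 8 -f hosts ./hello_mpi   # MPICH-style launch across the hostfile
```

With Open MPI, the equivalent launch would be mpirun -np 8 --hostfile hosts ./hello_mpi.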

Features

  • Pre-built Images for Different Workloads: Choose from CPU-only, CUDA, or ROCm images depending on your computation needs. Each image comes pre-configured and is available in the Appliance catalog.
  • MPI-Ready Cluster: Every node has MPICH, OpenMPI, and Spack installed and configured, so you can compile and run parallel applications immediately, without manual setup (see the sketch after this list).
  • Shared Filesystem Support: The Chameleon NFS shared filesystem can be set up to share data across all nodes in the cluster (available only on sites that support the Chameleon Shared File System, such as CHI@UC).
  • Quick Start Examples: Jupyter notebooks using python-chi, an OpenStack Heat template for bare-metal instances, and an OpenStack Heat template for KVM are included for rapid experimentation and deployment, helping you get started with minimal effort.
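A minimal sketch of switching between the pre-installed MPI stacks via Spack. The Spack subcommands shown are standard; that MPICH and OpenMPI are Spack-managed packages on a given image is an assumption.

```sh
# Hypothetical session on the main node; assumes MPICH/OpenMPI are Spack-managed.
spack find            # list the packages installed on the image
spack load mpich      # put MPICH's mpicc/mpiexec first on PATH
# ...build and run against MPICH, then switch stacks if desired:
spack unload mpich
spack load openmpi    # switch to Open MPI's compiler wrappers and launcher
```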
