diff --git a/demo/qrmi/slurm-docker-cluster/INSTALL.md b/demo/qrmi/slurm-docker-cluster/INSTALL.md index c8a8060..7802bf6 100644 --- a/demo/qrmi/slurm-docker-cluster/INSTALL.md +++ b/demo/qrmi/slurm-docker-cluster/INSTALL.md @@ -1,27 +1,25 @@ # Installation -This document describes how to setup development environment and the plugins developed in this project. +This document describes how to set up a local, container-based, Slurm development environment and how to build and install QRMI and the SPANK plugin in a Slurm cluster. +## Set Up Local Development Environment -## Setup Local Development Environment - -### Jump To: -- [Pre-requisites](#pre-requisites) +### Jump To +- [Prerequisite](#prerequisite) - [Creating Docker-based Slurm Cluster](#creating-docker-based-slurm-cluster) -- [Building and installing QRMI and SPANK Plugins](#building-and-installing-qrmi-and-spank-plugins) -- [Running examples of primitive job in Slurm Cluster](#running-examples-of-primitive-job-in-slurm-cluster) - +- [Building and Installing QRMI and the SPANK Plugin](#building-and-installing-qrmi-and-the-spank-plugin) +- [Running Primitive Job Examples in Slurm Cluster](#running-primitive-job-examples-in-slurm-cluster) +- [Running Serialized Jobs Using the QRMI Task Runner](#running-serialized-jobs-using-the-qrmi-task-runner) -### Pre-requisites +### Prerequisite -- [Podman](https://podman.io/getting-started/installation.html) or [Docker](https://docs.docker.com/get-docker/) installed. You can use [Rancher Desktop](https://rancherdesktop.io/) instead of installing Docker on your PC. +A container manager such as [Podman](https://podman.io/getting-started/installation.html), [Rancher Desktop](https://rancherdesktop.io/), or [Docker](https://docs.docker.com/get-docker/). ### Creating Docker-based Slurm Cluster -You can skip below steps if you already have Slurm Cluster for development. +#### 1. Creating your local workspace -#### 1. 
Creating your workspace on your PC
```bash
mkdir -p 
cd 
```
@@ -34,6 +32,8 @@
git clone -b 0.9.0 https://github.com/giovtorres/slurm-docker-cluster.git
cd slurm-docker-cluster
```
+Slurm Docker Cluster v0.9.0 uses `SLURM_TAG`, defined in `slurm-docker-cluster/.env`, to specify the Slurm version. Currently, `SLURM_TAG` is set to `slurm-25-05-3-1`. This corresponds to a tag in Slurm's major release 25.05 from May 2025. Using a Slurm release prior to `slurm-24-05-5-1` requires rebuilding the SPANK plugin with `-DPRIOR_TO_V24_05_5_1` due to interface changes in `slurm-24-05-5-1`.
+
#### 3. Cloning qiskit-community/spank-plugins and qiskit-community/qrmi
```bash
@@ -50,7 +50,7 @@ popd
patch -p1 < ./shared/spank-plugins/demo/qrmi/slurm-docker-cluster/file.patch
```
-Rocky Linux 9 is used as default. If you want to another operating system, apply additional patch.
+Rocky Linux 9 is used by default. If you want another operating system, you must apply an additional patch (see below for CentOS 9 and CentOS 10 examples). The patch avoids the Slurm Docker Cluster requirement to include its copyright notice in repositories that copy the Slurm Docker Cluster code.
##### CentOS Stream 9
@@ -69,8 +69,8 @@ patch -p1 < ./shared/spank-plugins/demo/qrmi/slurm-docker-cluster/centos10.patch
```bash
docker compose build --no-cache
```
-* Need to install `docker-compose` for the `podman` users.
-* For example, `brew install docker-compose` for a MAC user
+
+Podman users must install `docker-compose`. macOS users can do this with `brew install docker-compose`.
#### 6. Starting a cluster
@@ -78,95 +78,88 @@ docker compose build --no-cache
docker compose up -d
```
-> [!NOTE]
-> Ensure that the following 6 containers are running on the PC. 
->
-> - c2 (Compute Node #2)
-> - c1 (Compute Node #1)
-> - slurmctld (Central Management Node)
-> - slurmdbd (Slurm DB Node)
-> - login (Login Node)
-> - mysql (Database node)
+Use `docker ps` to check that the following 6 containers are running:
-Slurm Cluster is now set up as shown.
+- c2 (Compute Node #2)
+- c1 (Compute Node #1)
+- slurmctld (Central Management Node)
+- slurmdbd (Slurm DB Node)
+- login (Login Node)
+- mysql (Database Node)
+
+You now have a Slurm cluster as shown below:

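The container check can also be scripted. The helper below is a sketch (the `check_cluster` function name and its output format are illustrative, not part of the repository); it reads container names, such as those printed by `docker ps`, and reports whether each expected node is present:

```bash
# check_cluster: read container names (one per line) from stdin and report
# whether each of the six expected cluster nodes is present.
# Hypothetical helper -- not part of the slurm-docker-cluster repository.
check_cluster() {
  local running name
  running="$(cat)"
  for name in c2 c1 slurmctld slurmdbd login mysql; do
    if printf '%s\n' "$running" | grep -qx "$name"; then
      echo "$name: running"
    else
      echo "$name: MISSING"
    fi
  done
}

# Usage: docker ps --format '{{.Names}}' | check_cluster
```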
+## Building and Installing QRMI and the SPANK Plugin
-### Building and installing QRMI and SPANK Plugins
-
+The following steps assume you are building code on `c1` (Compute Node #1). Other nodes are also acceptable.
-> [!NOTE]
-> The following explanation assumes:
-> - building code on `c1` node. Other nodes are also acceptable.
+1. Logging in to c1
-
-1. Login to c1 container
```bash
-% docker exec -it c1 bash
+docker exec -it c1 bash
```
-2. Creating python virtual env under shared volume
+2. Creating a Python virtual environment under the shared volume **on c1**
```bash
-[root@c1 /]# python3.12 -m venv /shared/pyenv
-[root@c1 /]# source /shared/pyenv/bin/activate
-[root@c1 /]# pip install --upgrade pip
+python3.12 -m venv /shared/pyenv
+source /shared/pyenv/bin/activate
+pip install --upgrade pip
```
-3. Building and installing [QRMI](https://github.com/qiskit-community/qrmi/blob/main/INSTALL.md)
+3. Building and installing [QRMI](https://github.com/qiskit-community/qrmi/blob/main/INSTALL.md) **on c1**
```bash
-[root@c1 /]# source ~/.cargo/env
-[root@c1 /]# cd /shared/qrmi
-[root@c1 /]# pip install -r requirements-dev.txt
-[root@c1 /]# maturin build --release
-[root@c1 /]# pip install /shared/qrmi/target/wheels/qrmi-*.whl
+source ~/.cargo/env
+cd /shared/qrmi
+pip install -r requirements-dev.txt
+maturin build --release
+pip install /shared/qrmi/target/wheels/qrmi-*.whl
```
-4. Building [SPANK Plugin](../../../plugins/spank_qrmi/README.md)
+4. Building the [SPANK plugin](../../../plugins/spank_qrmi/README.md) **on c1**
```bash
-[root@c1 /]# cd /shared/spank-plugins/plugins/spank_qrmi
-[root@c1 /]# mkdir build
-[root@c1 /]# cd build
-[root@c1 /]# cmake ..
-[root@c1 /]# make
+cd /shared/spank-plugins/plugins/spank_qrmi
+mkdir build
+cd build
+cmake ..
+make
```
-Which will install the QRMI from the [GitHub repo](https://github.com/qiskit-community/qrmi). 
If you are building locally for development it may be easier to build the QRMI from source, mounted at `/shared/qrmi` as per this guide.
+This will install QRMI from the [QRMI git repository](https://github.com/qiskit-community/qrmi). If you are building locally for development, it might be easier to build QRMI from source mounted at `/shared/qrmi` as shown below:
+
```bash
-[root@c1 /]# cd /shared/spank-plugins/plugins/spank_qrmi
-[root@c1 /]# mkdir build
-[root@c1 /]# cd build
-[root@c1 /]# cmake -DQRMI_ROOT=/shared/qrmi ..
-[root@c1 /]# make
+cd /shared/spank-plugins/plugins/spank_qrmi
+mkdir build
+cd build
+cmake -DQRMI_ROOT=/shared/qrmi ..
+make
```
-
5. Creating qrmi_config.json
-Refer to [this example](https://github.com/qiskit-community/spank-plugins/blob/main/plugins/spank_qrmi/qrmi_config.json.example) and describe your environment.
-Then, create a file under `/etc/slurm` or another location accessible to the Slurm daemons.
+Modify [this example](https://github.com/qiskit-community/spank-plugins/blob/main/plugins/spank_qrmi/qrmi_config.json.example) to fit your environment and add it to `/etc/slurm` or another location accessible to the Slurm daemons on each compute node you intend to use.
+
+IBM Quantum Platform (IQP) provides limited, free access to IBM Quantum systems. After registering with IBM Cloud and IQP, the list of accessible IBM Quantum systems can be found [here](https://quantum.cloud.ibm.com/computers). The `qrmi_config.json` file will require an API key and a CRN for each IQP system. API key instructions can be found [here](https://cloud.ibm.com/iam/apikeys). The CRN for each IQP system can be found [here](https://quantum.cloud.ibm.com/computers). For example, click on "ibm_torino", then open the "Instance access" section for the "ibm_torino" CRN.
-6. Installing SPANK Plugins
+6. 
Installing the SPANK plugin
+
+Create `/etc/slurm/plugstack.conf` and ensure it has the following line (assuming `qrmi_config.json` was added to `/etc/slurm`):
-Create `/etc/slurm/plugstack.conf` if not exists and add the following lines:
```bash
optional /shared/spank-plugins/plugins/spank_qrmi/build/spank_qrmi.so /etc/slurm/qrmi_config.json
```
-Above example assumes you create `qrmi_config.json` under `/etc/slurm` directory.
-
-> [!NOTE]
-> When you setup your own slurm cluster, `plugstack.conf`, `qrmi_config.json` and above plugin libraries need to be installed on the machines that execute slurmd (compute nodes) as well as on the machines that execute job allocation utilities such as salloc, sbatch, etc (login nodes). Refer [SPANK documentation](https://slurm.schedmd.com/spank.html#SECTION_CONFIGURATION) for more details.
+`plugstack.conf`, `qrmi_config.json`, and `spank_qrmi.so` must be installed on the machines that execute slurmd (compute nodes) as well as on the machines that execute job allocation utilities such as `salloc`, `sbatch`, etc. (login nodes). Refer to the [SPANK documentation](https://slurm.schedmd.com/spank.html#SECTION_CONFIGURATION) for more details.
-7. Checking SPANK Plugins installation
+7. Checking SPANK plugin installation
-If you complete above step, you must see additional options of `sbatch` like below.
+After completing the steps above, `sbatch --help` should show the QPU resource option as shown below:
```bash
[root@c1 /]# sbatch --help
@@ -175,39 +168,39 @@ Options provided by plugins:
--qpu=names Comma separated list of QPU resources to use.
```
-### Running examples of primitive job in Slurm Cluster
+### Running Primitive Job Examples in Slurm Cluster
-1. Loging in to login node
+1. Logging in to the login node
```bash
-% docker exec -it login bash
+docker exec -it login bash
+cd /data # Or another directory shared between the login and compute nodes
```
-2. Running Sampler job
+2. 
Running Sampler job on the **login node**
```bash
-[root@login /]# sbatch /shared/spank-plugins/demo/qrmi/jobs/run_sampler.sh
+sbatch /shared/spank-plugins/demo/qrmi/jobs/run_sampler.sh
```
-3. Running Estimator job
+3. Running Estimator job on the **login node**
```bash
-[root@login /]# sbatch /shared/spank-plugins/demo/qrmi/jobs/run_estimator.sh
+sbatch /shared/spank-plugins/demo/qrmi/jobs/run_estimator.sh
```
-4. Running Pasqal job
+4. Running Pasqal job on the **login node**
```bash
-[root@login /]# sbatch /shared/spank-plugins/demo/qrmi/jobs/run_pulser_backend.sh
+sbatch /shared/spank-plugins/demo/qrmi/jobs/run_pulser_backend.sh
```
5. Checking primitive results
-Once above scripts are completed, you must find `slurm-{job_id}.out` in the current directory.
+You should find `slurm-{job_id}.out` files in the current directory. For example,
-For example,
```bash
-[root@login /]# cat slurm-81.out
+cat slurm-81.out # Assuming job_id is 81
{'backend_name': 'test_eagle'}
>>> Observable: ['IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII...',
 'IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII...',
 'IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII...',
@@ -232,13 +225,8 @@ For example,
> Metadata: {'shots': 4096, 'target_precision': 0.015625, 'circuit_metadata': {}, 'resilience': {}, 'num_randomizations': 32}
```
-### Running serialized jobs using the qrmi_task_runner Slurm Cluster
+### Running Serialized Jobs Using the QRMI Task Runner
-It is possible to run JSON-serialized jobs directly using a commandline utility called qrmi_task runner.
-See [the docs](https://github.com/qiskit-community/qrmi/blob/main/bin/task_runner/README.md) for that tool for details.
-
-```bash
-[root@login /]# sbatch /shared/spank-plugins/demo/qrmi/jobs/run_task.sh
-```
+It is possible to run JSON-serialized jobs directly using a command-line utility called qrmi_task_runner. See the [task_runner examples](https://github.com/qiskit-community/qrmi/blob/main/python/qrmi/tools/README.md) for details. 
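A serialized job is submitted the same way as the primitive examples above. The wrapper below is a sketch (the `submit_task` function is illustrative, and the batch script path is an assumption following the demo jobs directory used above); `sbatch --parsable` prints only the job ID, which determines the name of the output file to inspect:

```bash
# submit_task: submit the task-runner batch script and print the name of the
# Slurm output file to watch. Hypothetical helper; the script path is assumed.
submit_task() {
  local job_id
  # --parsable makes sbatch print just the job ID (no "Submitted batch job" text)
  job_id=$(sbatch --parsable /shared/spank-plugins/demo/qrmi/jobs/run_task.sh) || return 1
  echo "slurm-${job_id}.out"
}
```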
## END OF DOCUMENT
diff --git a/docs/howtos/ibmcloud_cos.md b/docs/howtos/ibmcloud_cos.md
index 9c9dbff..cebcbc2 100644
--- a/docs/howtos/ibmcloud_cos.md
+++ b/docs/howtos/ibmcloud_cos.md
@@ -1,23 +1,20 @@
# Using IBM Cloud COS as S3 compatible storage
-This document describes how to use IBM Cloud COS as S3-compatible storage, specifically how to obtain the AWS Access Key ID and Secret Access Key for use with S3-compatible tools and libraries.
+This document describes how to use IBM Cloud COS as S3-compatible storage, specifically, how to obtain the AWS Access Key ID (`QRMI_IBM_DA_AWS_ACCESS_KEY_ID`), the AWS Secret Access Key (`QRMI_IBM_DA_AWS_SECRET_ACCESS_KEY`), and the S3 endpoint URL (`QRMI_IBM_DA_S3_ENDPOINT`).
-## Prerequisites
-* IBM Cloud COS instance
+## Prerequisite
-## How to obtain AWS Access Key ID and Secret Access Key
+IBM Cloud Object Storage instance and bucket
-- Go to the [IBM Cloud Object Storage web page](https://cloud.ibm.com/objectstorage/overview) to create an S3 instance and a bucket in your instance.
-Refer [this guide](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main&locale=en) to obtain AWS Access Key ID and Secret Access Key.
+## How to obtain the AWS Access Key ID and Secret Access Key
-`IBM Cloud -> Infrastructure -> Storage -> Objective Storage` in order to navigate IBM Cloud website.
+To create your credentials, navigate to the `Service credentials` tab on your instance's page. All instances can be found on the [IBM Cloud Instances web page](https://cloud.ibm.com/objectstorage/instances). Click `New Credential` in the `Service credentials` tab to create your HMAC (Hash-based Message Authentication Code) credentials.
-Then, `Create Instance` and `Create Bucket` in your instance, accordingly. After the bucket is created, navigate to your instance, and click `Service credentials` and Click `New Credentials` to create your credential with HMAC. 
-
-HMAC credentials consist of an Access Key and Secret Key paired for use with S3-compatible tools and libraries that require authentication. Users can create a set of HMAC credentials as part of a Service Credential by switching the `Include HMAC Credential` to `On` during credential creation in the console.
+HMAC credentials consist of an Access Key and Secret Key paired for use with S3-compatible tools and libraries that require authentication. Users can create a set of HMAC credentials as part of a Service Credential by switching the `Include HMAC Credential` to `On` as shown below:
![include_HMAC_credential](https://cloud.ibm.com/docs-content/v4/content/3842758572478f973a02d6e5afad955eb1a777d2/cloud-object-storage/images/hmac-credential-dialog.jpg)
-After the Service Credential is created, the HMAC Key is included in the `cos_hmac_keys` field like below. `access_key_id` is AWS Access Key ID and `secret_access_key` is AWS Secret Access Key.
+After the Service Credential is created, the HMAC credentials are included in the `cos_hmac_keys` field as shown below. Click on the `v` on the left to expose the full Service Credential. `access_key_id` is the AWS Access Key ID and `secret_access_key` is the AWS Secret Access Key.
```bash
{
@@ -30,9 +27,17 @@ After the Service Credential is created, the HMAC Key is included in the `cos_hm
 "iam_apikey_description": ...
```
-## How to obtain S3 endpoint URL
+You can also use the IBM Cloud CLI to create the credentials as shown below. The `access_key_id` and `secret_access_key` are included in the command's output.
+
+```bash
+ibmcloud resource service-key-create Writer --instance-name "" --parameters '{"HMAC":true}'
+```
+
+Refer to the [IBM Cloud documentation](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main) for details about using HMAC credentials.
+
+## How to obtain the S3 endpoint URL
-Service credential contains `endpoints` field. 
Open this URL and choose one to fit to your IBM Cloud COS instance. For example, if your instance is located in us-east region, `https://s3.us-east.cloud-object-storage.appdomain.cloud` is an endpoint for your instance.
+S3 endpoints can be found in the [IBM Cloud Object Storage Regional Endpoints list](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints#endpoints-region). Choose the one that fits your IBM Cloud Object Storage instance. For example, if your instance is located in the `us-east` region, then the endpoint for your instance is `https://s3.us-east.cloud-object-storage.appdomain.cloud`.
-END OF DOCUMENT
+## END OF DOCUMENT
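Because the regional endpoints follow a fixed pattern, the QRMI environment variable can be derived from the region name. A minimal sketch, using `us-east` from the example above (substitute your instance's region):

```bash
# Derive the S3 endpoint for an IBM Cloud Object Storage instance from its region.
# "us-east" is an example value; replace it with your instance's region.
region="us-east"
export QRMI_IBM_DA_S3_ENDPOINT="https://s3.${region}.cloud-object-storage.appdomain.cloud"
echo "$QRMI_IBM_DA_S3_ENDPOINT"
```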