
Commit 0aa22b9

Fixing the Acrolinx for Correctness score
1 parent 35a47ee commit 0aa22b9

File tree

1 file changed: 7 additions and 7 deletions


articles/cyclecloud/openpbs.md

Lines changed: 7 additions & 7 deletions
@@ -8,10 +8,10 @@ ms.author: adjohnso

# OpenPBS

- [//]: # (Need to link to the scheduler README on Github)
+ [//]: # (Need to link to the scheduler README on GitHub)

::: moniker range="=cyclecloud-7"
- [OpenPBS](http://openpbs.org/) can easily be enabled on a CycleCloud cluster by modifying the "run_list" in the configuration section of your cluster definition. A PBS Professional (PBS Pro) cluster has two main parts: the 'master' node, which runs the software on a shared filesystem, and the 'execute' nodes, which mount that filesystem and run the submitted jobs. For example, a simple cluster template snippet may look like:
+ [OpenPBS](http://openpbs.org/) can easily be enabled on a CycleCloud cluster by modifying the "run_list", in the configuration section of your cluster definition. A PBS Professional (PBS Pro) cluster has two main parts: the 'master' node, which runs the software on a shared filesystem, and the 'execute' nodes, which mount that filesystem and run the submitted jobs. For example, a simple cluster template snippet may look like:

``` ini
[cluster my-pbspro]
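# Not part of this commit: a hedged sketch of how this truncated template
# snippet typically continues. The node names, role names, and machine types
# below are illustrative assumptions, not values taken from the file.

    [[node master]]
    MachineType = Standard_D4s_v3

        [[[configuration]]]
        run_list = role[pbspro_master_role]

    [[nodearray execute]]
    MachineType = Standard_HB60rs

        [[[configuration]]]
        run_list = role[pbspro_execute_role]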
@@ -50,11 +50,11 @@ These resources can be used in combination as:
```bash
qsub -l nodes=8:ppn=16:nodearray=hpc:machinetype=Standard_HB60rs my-simulation.sh
```
- Which will autoscale only if the 'Standard_HB60rs' machines are specified in the 'hpc' node array.
+ Which autoscales only if the 'Standard_HB60rs' machines are specified in the 'hpc' node array.

## Adding extra queues assigned to nodearrays

- On clusters with multiple nodearrays, it's common to create separate queues to automatically route jobs to the appropriate VM type. In this example, we'll assume the following "gpu" nodearray has been defined in your cluster template:
+ On clusters with multiple nodearrays, it's common to create separate queues to automatically route jobs to the appropriate VM type. In this example, we assume the following "gpu" nodearray is defined in your cluster template:

```bash
[[nodearray gpu]]
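    # Not part of this commit: an illustrative continuation of the "gpu"
    # nodearray snippet. The machine type and slot_type value are assumptions.
    MachineType = Standard_NC6s_v3

        [[[configuration]]]
        pbspro.slot_type = gpu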
@@ -80,7 +80,7 @@ After importing the cluster template and starting the cluster, the following com
```

> [!NOTE]
- > The above queue definition packs all VMs in the queue into a single VM scale set to support MPI jobs. To define the queue for serial jobs and allow multiple VM Scalesets, set `ungrouped = true` for both `resources_default` and `default_chunk`. You can also set `resources_default.place = pack` if you want the scheduler to pack jobs onto VMs instead of round-robin allocation of jobs. For more information on PBS job packing, see the official [PBS Professional OSS documentation](https://www.altair.com/pbs-works-documentation/).
+ > As shown in the example, queue definition packs all VMs in the queue into a single VM scale set to support MPI jobs. To define the queue for serial jobs and allow multiple VM Scalesets, set `ungrouped = true` for both `resources_default` and `default_chunk`. You can also set `resources_default.place = pack` if you want the scheduler to pack jobs onto VMs instead of round-robin allocation of jobs. For more information on PBS job packing, see the official [PBS Professional OSS documentation](https://www.altair.com/pbs-works-documentation/).
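As a hedged illustration of the note above, queue defaults such as `resources_default.ungrouped` and `default_chunk.ungrouped` are typically set with `qmgr`; the queue name `serial` below is an assumption, not a queue defined in this file:

```bash
# Sketch only: the queue name "serial" is assumed for illustration.
qmgr -c "set queue serial resources_default.ungrouped = true"
qmgr -c "set queue serial default_chunk.ungrouped = true"
# Optional: pack jobs onto VMs instead of round-robin placement.
qmgr -c "set queue serial resources_default.place = pack"
```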
## PBS Professional Configuration Reference
@@ -89,7 +89,7 @@ The following are the PBS Professional(PBS Pro) specific configuration options y
| PBS Pro Options | Description |
| --------------- | ----------- |
| pbspro.slots | The number of slots for a given node to report to PBS Pro. The number of slots is the number of concurrent jobs a node can execute, this value defaults to the number of CPUs on a given machine. You can override this value in cases where you don't run jobs based on CPU but on memory, GPUs, etc. |
- | pbspro.slot_type | The name of type of 'slot' a node provides. The default is 'execute'. When a job is tagged with the hard resource `slot_type=<type>`, that job runs *only* on the machine of same slot type. It allows you to create a different software and hardware configurations per node and ensure an appropriate job is always scheduled on the correct type of node. |
+ | pbspro.slot_type | The name of type of 'slot' a node provides. The default is 'execute'. When a job is tagged with the hard resource `slot_type=<type>`, that job runs *only* on the machine of the same slot type. It allows you to create a different software and hardware configurations per node and ensure an appropriate job is always scheduled on the correct type of node. |
| pbspro.version | Default: '18.1.3-0'. This is currently the default version and *only* option to install and run. This is currently the default version and *only* option. In the future more versions of the PBS Pro software may be supported. |
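For context, a hedged sketch of how the `pbspro.slot_type` and `pbspro.slots` options from this table might appear in a nodearray's configuration section; the nodearray name and values are illustrative assumptions:

```ini
# Illustrative values only.
[[nodearray gpu]]

    [[[configuration]]]
    pbspro.slot_type = gpu
    pbspro.slots = 2
```

A job submitted with the hard resource `slot_type=gpu` would then run only on nodes of that slot type, as the table describes.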
::: moniker-end
@@ -182,7 +182,7 @@ Currently, disk size is hardcoded to `size::20g`. Here's an example of handling

### Autoscale and Scalesets

- CycleCloud treats spanning and serial jobs differently in OpenPBS clusters. Spanning jobs will land on nodes that are part of the same placement group. The placement group has a particular platform meaning VirtualMachineScaleSet with SinglePlacementGroup=true) and CycleCloud will manage a named placement group for each spanned node set. Use the PBS resource `group_id` for this placement group name.
+ CycleCloud treats spanning and serial jobs differently in OpenPBS clusters. Spanning jobs land on nodes that are part of the same placement group. The placement group has a particular platform meaning VirtualMachineScaleSet with SinglePlacementGroup=true) and CycleCloud manages a named placement group for each spanned node set. Use the PBS resource `group_id` for this placement group name.

The `hpc` queue appends the equivalent of `-l place=scatter:group=group_id` by using native queue defaults.
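To make the spanning behavior concrete, a hedged example of a multi-node submission; the chunk counts and script name are assumptions, and the explicit `-l place` option only mirrors what the `hpc` queue already applies through its defaults:

```bash
# Illustrative only: the hpc queue's native defaults already add the
# equivalent of "-l place=scatter:group=group_id" for jobs submitted to it.
qsub -q hpc -l select=2:ncpus=16 -l place=scatter:group=group_id my-mpi-job.sh
```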
