<%-
  groups = OodSupport::User.new.groups.sort_by(&:id).tap { |groups|
    groups.unshift(groups.delete(OodSupport::Process.group))
  }.map(&:name).grep(/^P./)
-%>
---
title: "Pitzer RHEL 9 Desktop"
cluster: "pitzer-rhel9"
description: |
  This app launches an interactive desktop on one or more compute nodes. It is
  intended for work that needs substantial compute and/or memory resources,
  since you get full access to all the resources on the allocated compute
  node(s).

  If you do not need all of these resources, use the
  [Lightweight Desktop](/pun/sys/dashboard/batch_connect/sys/bc_desktop/vdi/session_contexts/new)
  app instead, which is better suited to general-purpose use cases.
form:
  # everything is taken from bc_desktop/form.yml except cores is added
  - bc_vnc_idle
  - desktop
  - account
  - bc_num_hours
  - gpus
  - cores
  - bc_num_slots
  - licenses
  - node_type
  - bc_queue
  - bc_vnc_resolution
  - bc_email_on_started
attributes:
  desktop:
    widget: select
    label: "Desktop environment"
    options:
      - ["Xfce", "xfce"]
      - ["Mate", "mate"]
      - ["Gnome", "gnome"]
    help: |
      This will launch the [Xfce], [Mate], or [Gnome] desktop environment on
      the [Pitzer cluster].

      [Xfce]: https://xfce.org/
      [Mate]: https://mate-desktop.org/
      [Gnome]: https://www.gnome.org/
      [Pitzer cluster]: https://www.osc.edu/supercomputing/computing/pitzer
  bc_queue: null
  account:
    label: "Project"
    widget: select
    options:
      <%- groups.each do |group| %>
      - "<%= group %>"
      <%- end %>
  cores:
    widget: number_field
    value: 48
    min: 1
    max: 48
    step: 1
  gpus:
    widget: number_field
    min: 0
    max: 4
  licenses:
    value: ""
    widget: hidden_field
  node_type:
    widget: select
    label: "Node type"
    help: |
      - **Standard Compute** <br>
        These are standard HPC machines. There are 224 nodes with 40 cores and
        340 nodes with 48 cores. They all have 192 GB of RAM. Choosing "any"
        will decrease your wait time.
      - **GPU Enabled** <br>
        These are HPC machines with [NVIDIA Tesla V100 GPUs]. 40 core machines
        have 2 GPUs with 16 GB of GPU memory each, and 48 core machines have
        2 GPUs with 32 GB each. Densegpu nodes have 4 GPUs with 16 GB each.
        Visualization nodes are GPU enabled nodes with an X server running in
        the background for 3D visualization using VirtualGL.
      - **Large Memory** <br>
        These are HPC machines with very large amounts of memory. Largemem
        nodes have 48 cores with 768 GB of RAM. Hugemem nodes have 80 cores
        with 3 TB of RAM.

      Visit the OSC site for more [detailed information on the Pitzer cluster].

      [detailed information on the Pitzer cluster]: https://www.osc.edu/resources/technical_support/supercomputers/pitzer
      [NVIDIA Tesla V100 GPUs]: https://www.nvidia.com/en-us/data-center/v100/
    options:
      - [
          "any", "any",
          data-min-cores: 1,
          data-max-cores: 48,
        ]
      - [
          "40 core", "any-40core",
          data-min-cores: 1,
          data-max-cores: 40,
          data-set-gpus: 0,
        ]
      - [
          "48 core", "any-48core",
          data-min-cores: 1,
          data-max-cores: 48,
          data-set-gpus: 0,
        ]
      - [
          "any gpu", "gpu-any",
          data-min-cores: 1,
          data-max-cores: 48,
          data-set-gpus: 1,
        ]
      - [
          "40 core with gpu", "gpu-40core",
          data-min-cores: 1,
          data-max-cores: 40,
          data-set-gpus: 1,
        ]
      - [
          "48 core with gpu", "gpu-48core",
          data-min-cores: 1,
          data-max-cores: 48,
          data-set-gpus: 1,
        ]
      - [
          "densegpu", "densegpu",
          data-min-cores: 1,
          data-max-cores: 48,
          data-set-gpus: 4,
        ]
      - [
          "visualization node", "vis",
          data-min-cores: 1,
          data-max-cores: 48,
          data-set-gpus: 1,
        ]
      - [
          "largemem", "largemem",
          data-min-cores: 24,
          data-max-cores: 48,
          data-set-gpus: 0,
        ]
      - [
          "hugemem", "hugemem",
          data-min-cores: 20,
          data-max-cores: 80,
          data-set-gpus: 0,
        ]
submit: submit/slurm.yml.erb