<li><a href="#configure-a-yorc-orchestrator">Configure a Yorc Orchestrator</a></li>
<li><a href="#configure-an-openstack-location">Configure an OpenStack Location</a></li>
<li><a href="#configure-a-slurm-location">Configure a Slurm Location</a>
<ul>
<li><a href="#configure-slurm-location-to-handle-docker-workloads-with-singularity">Configure Slurm location to handle Docker workloads with Singularity</a>
<ul>
<li><a href="#limit-resources-used-by-containers">Limit resources used by containers</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#configure-a-hosts-pool-location">Configure a Hosts Pool Location</a></li>
<li><a href="#configure-a-google-cloud-platform-location">Configure a Google Cloud Platform Location</a></li>
<li><a href="#configure-an-aws-location">Configure an AWS Location</a></li>
<li><a href="#configure-a-kubernetes-location">Configure a Kubernetes Location</a></li>
</ul>

<p>In order to deploy applications and run them on a given infrastructure, Yorc must be properly configured for that infrastructure (see <a href="https://yorc.readthedocs.io/en/stable/configuration.html#infrastructures-configuration">Infrastructure configuration</a> in the Yorc documentation).</p>
<h2 id="configure-a-slurm-location">Configure a Slurm Location</h2>
<p>If no password or private key is defined, the orchestrator will attempt to use a key <strong>~/.ssh/yorc.pem</strong> that should have been defined during your Yorc server setup.</p>
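<p>For reference, these connection settings correspond to Yorc's Slurm infrastructure configuration. A sketch is shown below; the key names follow the Yorc documentation, while the values are illustrative assumptions:</p>

```yaml
# Sketch of a Yorc Slurm infrastructure configuration (values are examples).
# If neither password nor private_key is set, Yorc falls back to ~/.ssh/yorc.pem.
infrastructures:
  slurm:
    user_name: slurmuser              # SSH user on the Slurm front-end node
    private_key: ~/.ssh/yorc.pem      # SSH private key used to connect
    url: slurm-login.example.com      # address of the Slurm front-end (illustrative)
    port: 22
```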
<h3 id="configure-slurm-location-to-handle-docker-workloads-with-singularity">Configure Slurm location to handle Docker workloads with Singularity</h3>
<p>The Slurm location ships a Topology Modifier that transforms a Docker workload into a Singularity workload backed by a Slurm job.
This is exactly what we do with the <a href="#configure-a-kubernetes-location">Kubernetes Location</a>, where a Docker workload is transformed into a Kubernetes one.
This makes it possible to port an application modeled with generic Docker components to either infrastructure.</p>
<p>At the moment, the following restrictions apply:</p>
<ul>
<li>Only Docker workloads of type Jobs are supported. That means that your DockerApplication should be hosted on a ContainerRuntime, itself hosted on a ContainerJobUnit.</li>
<li>You can add DockerExtVolume components to mount volumes into your container. Currently only volumes of type <code>yorc.nodes.slurm.HostToContainerVolume</code> are supported, meaning that the mounted path belongs to the actual host that runs the container. In a Slurm context this is generally a path on a distributed (such as NFS) or parallel (Lustre, GPFS) filesystem, or a temporary directory.</li>
<li>Resource limitations are not handled in the same way as in Kubernetes (see below).</li>
</ul>
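<p>The expected hosting chain can be sketched as the following topology excerpt. This is only a sketch: node names are illustrative, the <code>DockerApplication</code> type is abbreviated (use the full type name from your catalog), and the <code>path</code> property name is an assumption:</p>

```yaml
# Hypothetical TOSCA topology excerpt: a Docker job workload on Slurm/Singularity.
node_templates:
  job_unit:
    type: yorc.nodes.slurm.ContainerJobUnit
  runtime:
    type: yorc.nodes.slurm.ContainerRuntime
    requirements:
      - host: job_unit          # ContainerRuntime hosted on the ContainerJobUnit
  my_container:
    type: DockerApplication     # abbreviated; full type name from your catalog
    requirements:
      - host: runtime           # DockerApplication hosted on the ContainerRuntime
  data_volume:
    type: yorc.nodes.slurm.HostToContainerVolume
    properties:
      path: /mnt/nfs/data       # host path to mount (e.g. an NFS mount); property name assumed
```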
<p>Go to <img src="../../../../images/2.1.0/yorc/on-demand-ressource-tab.png" alt="on-demand resources" height="26px" class="inline" /> and add the following resources:</p>
<ul>
<li>yorc.nodes.slurm.ContainerJobUnit</li>
<li>yorc.nodes.slurm.ContainerRuntime</li>
<li>yorc.nodes.slurm.HostToContainerVolume</li>
</ul>
<h4 id="limit-resources-used-by-containers">Limit resources used by containers</h4>
<p>When backed by Docker, Kubernetes uses the concept of CPU shares to limit container CPU consumption.
This concept does not apply in a Slurm context, where resources are strongly isolated.
So instead of relying on the <code>cpu_share</code>, <code>cpu_share_limit</code>, <code>mem_share</code> and <code>mem_share_limit</code> properties of <code>DockerApplication</code>s, we rely
on the <code>ApplicationHost</code> capability of the <code>ContainerRuntime</code> hosting the <code>DockerApplication</code>. This capability has <code>num_cpus</code> and
<code>mem_size</code> properties that are used to request a given number of CPUs and amount of memory from Slurm.</p>
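<p>As an illustration, the resource request is expressed on the <code>ContainerRuntime</code> node rather than on the <code>DockerApplication</code>. In this sketch the capability name <code>host</code> and the property values are assumptions:</p>

```yaml
# Hypothetical node template excerpt: the ContainerRuntime's host capability
# (of type ApplicationHost) carries the resource request passed to Slurm.
runtime:
  type: yorc.nodes.slurm.ContainerRuntime
  capabilities:
    host:                    # capability name assumed
      properties:
        num_cpus: 4          # number of CPUs requested from Slurm (example)
        mem_size: "8 GB"     # amount of memory requested from Slurm (example)
```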
<h2 id="configure-a-hosts-pool-location">Configure a Hosts Pool Location</h2>
<p>Go to the locations page by clicking on <img src="../../../../images/2.1.0/yorc/orchestrator-location-btn.png" alt="orchestrator location" height="26px" class="inline" /></p>
<h2 id="configure-a-google-cloud-platform-location">Configure a Google Cloud Platform Location</h2>
<p>Specify which image to use to initialize the boot disk, defining properties <strong>image_project</strong>, <strong>image_family</strong>, <strong>image</strong>.</p>
<p>At least one of the tuples <strong>image_project/image_family</strong>, <strong>image_project/image</strong>, <strong>family</strong>, <strong>image</strong> should be defined:</p>
<ul>
<li><strong>image_project</strong> is the project against which all image and image family references will be resolved. If not specified, and either image or image_family is provided, the current default project is used.</li>
<li><strong>image_family</strong> is the family of the image that the boot disk will be initialized with. When a family is specified instead of an image, the latest non-deprecated image associated with that family is used.</li>
<li><strong>image</strong> is the image from which to initialize the boot disk. If not specified, and an image family is specified, the latest non-deprecated image associated with that family is used.</li>
</ul>
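<p>For example, to boot from the latest CentOS 7 image you could set the first tuple as follows. The property names are those described above; <code>centos-cloud</code> and <code>centos-7</code> are public Google image project and family names used here as illustrative values:</p>

```yaml
# Hypothetical on-demand compute resource properties (boot disk image).
image_project: centos-cloud   # project hosting the public CentOS images
image_family: centos-7        # latest non-deprecated image of this family is used
```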
<p>You can also create a custom or default subnet for a new or existing network, as long as there is no CIDR range overlap.</p>
<p>For private network creation, you can specify subnets in three different ways:</p>
<ul>
<li>by checking the checkbox <strong>auto_create_subnetworks</strong>: Google will automatically create a subnet for each region with predefined IP ranges.</li>
<li>by setting <strong>cidr</strong> and <strong>cidr_region</strong>: a default subnet will be created with the specified IP CIDR range in the specified Google region.</li>
<li>by adding custom subnets: you can add several subnets with more precise properties, as described below.</li>
</ul>
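<p>For instance, the second option corresponds to settings like the following sketch, with illustrative values for the CIDR range and region:</p>

```yaml
# Hypothetical private network properties: a default subnet is created
# with the given IP CIDR range in the given Google region.
cidr: "10.10.0.0/24"
cidr_region: "europe-west1"
```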
<h2 id="configure-a-kubernetes-location">Configure a Kubernetes Location</h2>
<p>Select the <strong>Yorc</strong> orchestrator and go to the locations page by clicking on <img src="../../../../images/2.1.0/yorc/orchestrator-location-btn.png" alt="orchestrator location" height="26px" class="inline" />.
Create a location named <strong>kubernetes</strong> (or a name of your choice) and select <strong>Kubernetes</strong> in the infrastructure type drop-down. The details page of your location should appear.</p>
<p>Go to <img src="../../../../images/2.1.0/yorc/on-demand-ressource-tab.png" alt="on-demand resources" height="26px" class="inline" /> and search the <strong>Catalog</strong> for resources with the type prefix <strong>org.alien4cloud.kubernetes.api.types</strong>. You have to add the following resources:</p>
<ul>
<li><strong>org.alien4cloud.kubernetes.api.types.Deployment</strong></li>
<li><strong>org.alien4cloud.kubernetes.api.types.Job</strong></li>
<li><strong>org.alien4cloud.kubernetes.api.types.Container</strong></li>
<li><strong>org.alien4cloud.kubernetes.api.types.Service</strong></li>
<li><strong>org.alien4cloud.kubernetes.api.types.volume.</strong>* # the volume types needed by applications</li>
</ul>
<p>Go to the <img src="../../../../images/2.1.0/yorc/topology-modifier-tab.png" alt="topology modifier" height="26px" class="inline" /> view to check that the modifiers are uploaded to your location:</p>