
Commit 6e3bf62

FIX - AI Guides - spelling, grammar, punctuation
1 parent d0c3f41 commit 6e3bf62

File tree

105 files changed (+818, −818 lines)


pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.de-de.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:
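
The billing examples quoted in the hunks above all follow the same arithmetic: hours x resources per replica x replicas x hourly resource price. Below is a minimal sketch of that calculation in Python, using the hourly prices listed in the quoted examples; the `estimate_cost` helper is purely illustrative and not part of any OVHcloud API.

```python
# Minimal sketch of the pay-as-you-go arithmetic quoted in the guide excerpts above.
# The helper name and structure are illustrative, not an OVHcloud API.

def estimate_cost(hours: float, resources_per_replica: int,
                  replicas: int, price_per_resource_hour: float) -> float:
    """Cost = hours x resources per replica x replicas x hourly resource price."""
    return hours * resources_per_replica * replicas * price_per_resource_hour

# Example 1 from the guide: one app, 2 x GPU V100s at 1.95 EUR/hour, 1 replica, 10 hours.
print(estimate_cost(10, 2, 1, 1.95))      # 39.0 EUR

# Example 2: 15 apps in parallel, 1 vCPU each at 0.03 EUR/hour, for 5 hours.
print(15 * estimate_cost(5, 1, 1, 0.03))  # 2.25 EUR
```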

pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.en-asia.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:

pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.en-au.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:

pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.en-ca.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:

pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.en-gb.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:

pages/public_cloud/ai_machine_learning/deploy_guide_06_billing_concept/guide.en-ie.md

Lines changed: 12 additions & 12 deletions
@@ -17,11 +17,11 @@ OVHcloud AI Deploy service provides easiness in AI models and application deploy
 
 ## Introduction
 
-AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPus and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
+AI Deploy is linked to a Public Cloud project. The whole project is billed at the end of the month, with pay-as-you-go. This means you will only pay for what you consume, based on the compute resources you use (CPUs and GPUs) and their running time. At this time, we do not support a "pay per call" pricing.
 
 ## AI Deploy apps lifecycle
 
-OVHcloud AI deploy allows deployment of Docker images, and each deployment is called an `app`.
+OVHcloud AI Deploy allows deployment of Docker images, and each deployment is called an `app`.
 During its lifetime, the app will go through the following status:
 
 - `QUEUED`: the app deployment request is about to be processed.
@@ -99,10 +99,10 @@ We deploy one AI Deploy app, with 2 x GPUs and we keep it running for 10 hours t
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 2 x GPU NVIDIA V100s (1,95€ / hour)
-- scaling : fixed
-- replicas : 1 only
-- amount of calls : unlimited
+- compute resources per replica: 2 x GPU NVIDIA V100s (1,95€ / hour)
+- scaling: fixed
+- replicas: 1 only
+- amount of calls: unlimited
 - duration: 10 hours then deleted
 
 Price calculation for compute: 10 (hours) x 2 (GPU) x 1 (replica) x 1,93€ (price / GPU) = **39 euros**, billed at the end of the month.
@@ -114,9 +114,9 @@ We start 15 x AI Deploy apps in parallel, each of them with one vCPU.
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
 - compute resources per app with fixed scaling: 1 x vCPU (0,03€ /hour /cpu)
-- scaling : fixed
-- replica : 1 only
-- amount of calls : unlimited
+- scaling: fixed
+- replica: 1 only
+- amount of calls: unlimited
 - duration: 5 hours then deleted
 
 Price calculation for compute: 15 (app) x 5 (hours) x 1 (CPU) x 0,03€ (price / CPU) = **2,25 euros**, billed at the end of the month.
@@ -127,9 +127,9 @@ We start 1 x AI Deploy app with autoscaling configured to 1 replica minimum, and
 
 We receive thousands of calls: it's included (no pay per call provided, you pay running compute).
 
-- compute resources per replica : 1 x GPU (1,93€ /hour /gpu)
-- scaling : auto-scaling, from 1 to 3 replicas
-- amount of calls : unlimited
+- compute resources per replica: 1 x GPU (1,93€ /hour /gpu)
+- scaling: auto-scaling, from 1 to 3 replicas
+- amount of calls: unlimited
 - duration: 5 hours with 1 replica running, then a peak with 1 hour at 3 replicas, then stopped and deleted.
 
 Price calculation for compute will vary over time due to auto-scaling:
