Commit 95d1c95

Add information and warning message to raise awareness of the alpha phase and the public access limitation
1 parent bbb63cd commit 95d1c95

15 files changed: +150 lines, -15 lines

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.de-de.md

Lines changed: 10 additions & 1 deletion

@@ -11,6 +11,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -40,7 +45,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.de/&ovhSubsidiary=de) we will copy the `Endpoint` information and save it.
 
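For context on the warning added above: the notebook in these guides takes the bucket `Endpoint` and S3 credentials as inputs, which is why exposing them in a publicly accessible notebook is a risk. The following is only an illustrative sketch of how such values are typically passed to a Spark session; the endpoint value, bucket and file names, and environment variable names are assumptions, not content from this commit.

```python
# Minimal sketch (assumptions, not part of this commit): wiring the bucket
# `Endpoint` and S3 credentials into a Spark session from inside a notebook.
# Credentials are read from the environment so that nothing secret ends up
# in a notebook that is publicly accessible during the alpha.
import os

from pyspark.sql import SparkSession

endpoint = "https://s3.gra.io.cloud.ovh.net"   # example value; use the Endpoint copied from the Control Panel
access_key = os.environ["S3_ACCESS_KEY"]       # hypothetical variable names
secret_key = os.environ["S3_SECRET_KEY"]

spark = (
    SparkSession.builder
    .appName("notebook-data-cleaning")
    .config("spark.hadoop.fs.s3a.endpoint", endpoint)
    .config("spark.hadoop.fs.s3a.access.key", access_key)
    .config("spark.hadoop.fs.s3a.secret.key", secret_key)
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read one of the files uploaded to the root `/` of the bucket (hypothetical name).
df = spark.read.csv("s3a://my-bucket/my-data.csv", header=True, inferSchema=True)
df.printSchema()
```

Running something like this also assumes the S3A connector (`hadoop-aws`) is available to Spark; that is an environment detail outside the scope of this commit.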

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-asia.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com/asia/&ovhSubsidiary=asia) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-au.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com.au/&ovhSubsidiary=au) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-ca.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com/ca/en/&ovhSubsidiary=ca) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-gb.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.co.uk/&ovhSubsidiary=GB) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-ie.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.ie/&ovhSubsidiary=ie) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-sg.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com/sg/&ovhSubsidiary=sg) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.en-us.md

Lines changed: 10 additions & 1 deletion

@@ -9,6 +9,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -38,7 +43,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com/world/&ovhSubsidiary=we) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.es-es.md

Lines changed: 10 additions & 1 deletion

@@ -11,6 +11,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -40,7 +45,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.es/&ovhSubsidiary=es) we will copy the `Endpoint` information and save it.
 

pages/platform/data-processing/42_TUTORIAL_notebook-data-cleaning/guide.es-us.md

Lines changed: 10 additions & 1 deletion

@@ -11,6 +11,11 @@ updated: 2023-03-14
 
 **Last updated March 14th, 2023**
 
+> [!primary]
+>
+> The Notebooks for Apache Spark feature is in `alpha`. During the alpha-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, while this phase is not complete.
+>
+
 ## Objective
 
 The purpose of this tutorial is to show how to clean data with [Apache Spark](https://spark.apache.org/) inside a [Jupyter Notebook](https://jupyter.org/).
@@ -40,7 +45,11 @@ Select both files from your computer and add them to the root `/` of your bucket
 
 ### Retrieve bucket credentials
 
-There are a few information that we will need as inputs of the notebook.
+> [!warning]
+>
+> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
+
+There is a few information that we will need as inputs of the notebook.
 
 First, and while we're on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.es/&ovhSubsidiary=es) we will copy the `Endpoint` information and save it.
 
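Since every guide touched by this commit states the same objective, cleaning data with Apache Spark inside a Jupyter Notebook, a short sketch of such a cleaning step may help situate the warnings; the column names, paths and chosen operations are illustrative assumptions only and do not come from these guides.

```python
# Illustrative cleaning pass (all names are hypothetical, not from these guides):
# drop duplicates and incomplete rows, trim a text column, write the result back.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # session configured for the bucket as sketched earlier

raw = spark.read.csv("s3a://my-bucket/my-data.csv", header=True, inferSchema=True)

cleaned = (
    raw.dropDuplicates()                           # remove exact duplicate rows
       .na.drop(subset=["id"])                     # drop rows missing the key column
       .withColumn("name", F.trim(F.col("name")))  # strip stray whitespace
)

# Persist the cleaned dataset to the bucket as Parquet (hypothetical prefix).
cleaned.write.mode("overwrite").parquet("s3a://my-bucket/cleaned/")
```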
