Then, from the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.de/&ovhSubsidiary=de), go to the Object Storage section, locate your S3 bucket and upload your data by clicking `Add object`{.action}.
Select both files from your computer and add them to the root (`/`) of your bucket.
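If you would rather script the upload than use the Control Panel, a minimal sketch using the `boto3` library might look like the following. The endpoint, credentials, bucket and file names are all placeholders (the S3 credentials themselves are gathered in the next steps):

```python
# Hypothetical upload script; endpoint, credentials, bucket and file names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.<region>.cloud.ovh.net",  # your bucket's S3 endpoint
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
)

# Upload both dataset files to the root of the bucket.
for filename in ("first_dataset.csv", "second_dataset.csv"):
    s3.upload_file(filename, "<bucket_name>", filename)
```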
> Please be aware that notebooks are only available in `public access` during the `alpha` of the Notebooks for Apache Spark feature. As such, be careful of the **data** and the **credentials** you may expose in these notebooks.
We will need a few pieces of information as inputs for the notebook.
First, while we are on the container page of the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.de/&ovhSubsidiary=de), we will copy the `Endpoint` information and save it.
Go back to the Object Storage home page, open the S3 users tab, then copy the user's `access key` and save it.
Finally, click on the `...`{.action} button at the end of the user row, then click `View the secret key`{.action}, copy the value and save it.
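Inside the notebook, these three values are typically what you pass to Spark's S3A connector so it can read from your bucket. A minimal sketch, assuming a PySpark session built in the notebook and placeholder values for the endpoint and keys:

```python
from pyspark.sql import SparkSession

# Placeholder values; use the endpoint, access key and secret key saved above.
spark = (
    SparkSession.builder.appName("data-cleaning")
    .config("spark.hadoop.fs.s3a.endpoint", "https://s3.<region>.cloud.ovh.net")
    .config("spark.hadoop.fs.s3a.access.key", "<access_key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<secret_key>")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)
```

In a managed notebook a `SparkSession` may already exist, in which case the same S3A settings can be applied to its Hadoop configuration instead of building a new session.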
### Launch and access a Notebook for Apache Spark
From the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.de/&ovhSubsidiary=de), go to the Data Processing section and create a new notebook by clicking `Data Processing`{.action} and then `Create notebook`{.action}.
You can then reach the `JupyterLab` URL directly from the notebooks list or from the notebook page.
### Experiment with the notebook
Now that your initial datasets are ready in Object Storage and your notebook is running, you can start cleaning the data.
A preview of this notebook can be found on [GitHub](https://github.com/ovh/data-processing-samples/blob/master/apache_spark_notebook_data_cleaning/apache_spark_notebook_data_cleaning_tutorial.ipynb).
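The linked notebook walks through the full process; as a rough illustration of the kind of cleaning steps involved, a hypothetical pass might look like this (the bucket path and column names are placeholders, not the ones used in the tutorial):

```python
# Hypothetical cleaning pass; the bucket path and column names are placeholders.
df = spark.read.csv("s3a://<bucket_name>/first_dataset.csv", header=True, inferSchema=True)

cleaned = (
    df.dropDuplicates()                       # drop exact duplicate rows
      .na.drop(subset=["<required_column>"])  # drop rows missing a required field
)

# Write the cleaned dataset back to the bucket as Parquet.
cleaned.write.mode("overwrite").parquet("s3a://<bucket_name>/cleaned/")
```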
### Go further
- Do you want to turn your notebook into a data cleaning job that you can replay? [Please refer to this guide](https://docs.ovh.com/de/data-processing/submit-python/).