In this lab, we will build the infrastructure that we will use to run the rest of the workshop.
The main elements that we will be creating are:

- **Compute** instance using a Linux-based image from Oracle Cloud.
- **Autonomous JSON Database** where we'll allocate the JSON documents.
- **Data Science** session and notebook, to experiment with the newly-generated data using notebooks.

We will use Cloud Shell to execute the `start.sh` script, which will call Terraform and Ansible to deploy all the infrastructure required and set up the configuration. If you don't know about Terraform or Ansible, don't worry; there is no need.
- Terraform is an open-source tool to deploy resources in the cloud with code. You declare what you want in Oracle Cloud and Terraform makes sure you get the resources created.
- Ansible is an open-source tool to provision on top of the created resources. It automates the dependency installation and copies the source code and config files, so everything is ready for you to use.
Do you want to learn more? Feel free to check the Terraform and Ansible code after the workshop [in our official repository](https://github.com/oracle-devrel/leagueoflegends-optimizer/).
Estimated Time: 15 minutes

### Prerequisites

- An Oracle Free Tier, Paid or LiveLabs Cloud Account
- An active Oracle Cloud Account with available credits to use for the Data Science service.

### Objectives

In this lab, you will learn how to:

- Use Oracle Cloud Infrastructure for your Compute needs
- Deploy resources using Terraform and Ansible
- Learn about federation, and what's necessary to authenticate a Terraform request
- Download the datasets we will use
## Task 1: Cloud Shell
First, we need to download the official repository to get access to all the code (Terraform and Ansible code for this step).
1. From the Oracle Cloud Console, click on **Cloud Shell**.
4. Change directory with `cd` to the `leagueoflegends-optimizer` directory:

    ```bash
    <copy>cd leagueoflegends-optimizer/dev</copy>
    ```

5. Terraform uses a file called `tfvars` that contains the variables Terraform uses to talk to Oracle Cloud and set up your deployment the way you want it. You are going to copy a template we provide and fill in your own values. Run the following command on Cloud Shell:
> **Note**: for **Safari** users:<br>
> First, it isn't the recommended browser for OCI. Firefox and Chrome are fully tested and recommended.<br>
> With Safari, if you get a message _Cannot Start Code Editor_, go to _**Settings** > **Privacy**_ and disable _**Prevent cross-site tracking**_.<br>
> Then open Code Editor again.
2. On the **Code Editor**, go to _**File** > **Open**_.
3. On the pop-up, edit the path by clicking the pencil icon:
4. Append, at the end, the path to the `terraform.tfvars` file.
> **Note**: for experienced Oracle Cloud users:<br>
> Do you want to deploy the infrastructure on a specific compartment?<br>
> You can get the Compartment OCID in different ways.<br>
> The coolest one is with the OCI CLI from Cloud Shell.<br>
> You have to change _`COMPARTMENT_NAME`_ to the actual compartment name you are looking for and run the following command:
```bash
<copy>
oci iam compartment list --all --compartment-id-in-subtree true --query "data[].id" --name COMPARTMENT_NAME
</copy>
```
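The `--query` flag takes a JMESPath expression; `data[].id` keeps only the `id` field of every entry in the response's `data` array. As a rough stdlib-only illustration (the JSON shape is simplified and the OCIDs below are fake placeholders), the equivalent filtering in Python would be:

```python
import json

# Simplified stand-in for `oci iam compartment list` JSON output (fake OCIDs).
raw = """
{"data": [
  {"id": "ocid1.compartment.oc1..aaaa", "name": "team-a"},
  {"id": "ocid1.compartment.oc1..bbbb", "name": "team-b"}
]}
"""

# Equivalent of the JMESPath expression data[].id
ids = [entry["id"] for entry in json.loads(raw)["data"]]
print(ids)
```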
10. Generate an SSH key pair; by default it will create a private key at _`~/.ssh/id_rsa`_ and a public key at _`~/.ssh/id_rsa.pub`_.

    It will ask you to enter the path, a passphrase, and to confirm the passphrase; type _[ENTER]_ to continue all three times.
    ```bash
    <copy>ssh-keygen -t rsa</copy>
    ```

    > **Note**: if the command fails, select all defaults and then try running the command again.
11. We need the public key in our notes, so keep the output of the following command in your notes:

    ```bash
    <copy>cat ~/.ssh/id_rsa.pub</copy>
    ```
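If you want to sanity-check that what you copied is a complete, single-line public key, a quick illustrative check could look like the sketch below (this helper is hypothetical and not part of the workshop scripts):

```python
import base64

def looks_like_ssh_pubkey(line: str) -> bool:
    """Rough check for the '<type> <base64-blob> [comment]' layout of a public key."""
    parts = line.strip().split()
    if len(parts) < 2 or not parts[0].startswith(("ssh-", "ecdsa-")):
        return False
    try:
        # The second field must be valid base64.
        base64.b64decode(parts[1], validate=True)
        return True
    except ValueError:
        return False

print(looks_like_ssh_pubkey("ssh-rsa AAAAB3NzaC1yc2E= opc@cloudshell"))
```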

12. From the previous lab, you should have the Riot Developer API Key.

    Paste the Riot API Key into the `riotgames_api_key` entry of the file.

13. Save the file.
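When filled in, the `terraform.tfvars` might look roughly like the sketch below. Only `riotgames_api_key` is confirmed by this lab; the other variable names are illustrative assumptions based on typical templates, and all values are placeholders — use the names already present in the template file.

```
# Illustrative sketch only: variable names other than riotgames_api_key
# are assumptions; values are placeholders.
region            = "eu-frankfurt-1"
compartment_ocid  = "ocid1.compartment.oc1..xxxx"
ssh_public_key    = "ssh-rsa AAAA... opc@cloudshell"
riotgames_api_key = "RGAPI-xxxx-xxxx"
```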
## Task 3: Start Deployment
1. Run the `start.sh` script:

    ```bash
    <copy>./start.sh</copy>
    ```
2. The script will run and it looks like this.
3. Terraform will create resources for you, and during the process it will look like this.
4. Ansible will continue the work as part of the `start.sh` script. It looks like this.
5. The final part of the script is to print the output of all the work done.
6. Copy the `ssh` command from the output variable `compute`.
## Task 4: Check Deployment
1. Run the `ssh` command from the output of the script. It will look like this:
    ```bash
    <copy>ssh opc@PUBLIC_IP</copy>
    ```
2. On the new machine, run the Python script `check.py`, which makes sure everything is working:
    ```bash
    <copy>python src/check.py</copy>
    ```
3. The result will confirm that the database connection and the Riot API work.
4. If you get an error, make sure the _`terraform/terraform.tfvars`_ file from the previous task contains the correct values, and then run the _`start.sh`_ script again.
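The internals of `check.py` aren't shown in this lab, but a deployment checker of this kind typically runs a set of named probes and reports which ones passed. The sketch below is a hypothetical illustration of that pattern only; the real script's probes would open the database connection and call the Riot API.

```python
def run_checks(checks):
    """Run named check callables; a check passes if it doesn't raise."""
    results = {}
    for name, probe in checks.items():
        try:
            probe()
            results[name] = True
        except Exception:
            results[name] = False
    return results

if __name__ == "__main__":
    # Placeholder probes standing in for a real DB ping and Riot API request.
    demo = {
        "database": lambda: None,  # pretend the connection succeeded
        "riot_api": lambda: None,  # pretend the HTTP request succeeded
    }
    for name, ok in run_checks(demo).items():
        print(f"{name}: {'OK' if ok else 'FAILED'}")
```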
## Task 5: Setting up Data Science Environment
Once we have set up our Cloud Shell to extract data, we also need to prepare a Data Science environment to work with the data once it's been collected.
To achieve this, we need to load this workshop's notebook into our environment through the official repository.
1. Open a **Terminal** inside the _'Other'_ section of the console and re-download the repository:
    ```bash
    <copy>git clone https://github.com/oracle-devrel/leagueoflegends-optimizer.git</copy>
    ```
3. Install the conda environment, specifying the Python version we want (you can choose between 3.8, 3.9 and 3.10):
    ```bash
    <copy>conda create -n myconda python=3.9</copy>
    ```
> Note: make sure to accept prompts by typing 'y' as in 'Yes' when asked.
After these commands, all requirements will be fulfilled and we're ready to execute our notebooks with our newly created conda environment.
Once we execute any notebook in this Data Science environment, remember that we'll need to select the correct conda environment under the _Kernel_ dropdown menu.
## Task 6: Downloading Datasets
We now need to load our datasets into our environment. For that, we reuse the terminal we created in the previous step. Then, we execute the dataset download command, which will download all necessary datasets.

We should now see the repository files in our file explorer:
We navigate to the _`leagueoflegends-optimizer/notebooks/`_ directory and the notebook [_`models_2023.ipynb`_](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/notebooks/models_2023.ipynb) is the one we will review during this workshop.
Let's open it. You may now [proceed to the next lab](#next).
## Acknowledgements
- **Author** - Nacho Martinez, Data Science Advocate @ DevRel
- **Contributors** - Victor Martin, Product Strategy Director
- **Last Updated By/Date** - May 31st, 2023