@@ -111,19 +111,6 @@ You'll create a compute called `cpu-cluster` for your job, with this code:

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/configuration.ipynb?name=create-cpu-compute)]

-```python
-from azure.ai.ml.entities import AmlCompute
-
-# specify aml compute name.
-cpu_compute_target = 'cpu-cluster'
-
-try:
-    ml_client.compute.get(cpu_compute_target)
-except Exception:
-    print('Creating a new cpu compute target...')
-    compute = AmlCompute(name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4)
-    ml_client.compute.begin_create_or_update(compute)
-```

### 3. Environment to run the script
@@ -145,26 +132,8 @@ To run this script, you'll use a `command`. The command will be run by submitting

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]

-```python
-from azure.ai.ml import command, Input
-
-# define the command
-command_job = command(
-    code='./src',
-    inputs={'iris_csv': Input(type='uri_file', path='https://azuremlexamples.blob.core.windows.net/datasets/iris.csv')},
-    command='python main.py --iris-csv ${{inputs.iris_csv}}',
-    environment='AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest',
-    compute='cpu-cluster'
-)
-```
-

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]

-```python
-# submit the command
-returned_job = ml_client.jobs.create_or_update(command_job)
-# get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```

In the above, you configured:

- `code` - path where the code to run the command is located
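The `${{inputs.iris_csv}}` placeholder in the command string is resolved by Azure ML when the job runs, substituting the materialized path or URI of the `iris_csv` input. As a rough local illustration of that substitution (a simplified sketch only, not the actual Azure ML resolver; the function name and sample path are made up for this example):

```python
import re

def expand_command(command: str, inputs: dict) -> str:
    """Replace ${{inputs.<name>}} placeholders with their values.

    Illustrative stand-in only -- the real resolution happens
    service-side when Azure ML materializes the job's inputs.
    """
    def substitute(match):
        name = match.group(1)
        return str(inputs[name])

    return re.sub(r"\$\{\{\s*inputs\.(\w+)\s*\}\}", substitute, command)

# same shape as the command job above, with a hypothetical mount path standing in
expanded = expand_command(
    "python main.py --iris-csv ${{inputs.iris_csv}}",
    {"iris_csv": "/mnt/data/iris.csv"},
)
print(expanded)  # python main.py --iris-csv /mnt/data/iris.csv
```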
@@ -183,40 +152,13 @@ Let us improve our model by sweeping on `learning_rate` and `boosting` inputs to

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]

-```python
-# we will reuse the command_job created before. we call it as a function so that we can apply inputs
-# we do not apply the 'iris_csv' input again -- we will just use what was already defined earlier
-command_job_for_sweep = command_job(
-    learning_rate=Uniform(min_value=0.01, max_value=0.9),
-    boosting=Choice(values=["gbdt", "dart"]),
-)
-```
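For intuition: with `sampling_algorithm='random'`, each trial draws its values independently from the search space above — `learning_rate` uniformly from [0.01, 0.9] and `boosting` uniformly from the two choices. A minimal stand-in using only the standard library (illustrative only, not the Azure ML sampler; `sample_trial` is a name invented for this sketch):

```python
import random

def sample_trial(rng: random.Random) -> dict:
    """Draw one trial from the search space defined above:
    learning_rate ~ Uniform(0.01, 0.9), boosting ~ Choice(['gbdt', 'dart'])."""
    return {
        "learning_rate": rng.uniform(0.01, 0.9),
        "boosting": rng.choice(["gbdt", "dart"]),
    }

rng = random.Random(0)  # seeded for reproducibility
trials = [sample_trial(rng) for _ in range(20)]  # mirrors max_total_trials=20
for trial in trials[:3]:
    print(trial)
```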
Now that you've defined the parameters, run the sweep:

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]

-```python
-# apply the sweep parameter to obtain the sweep_job
-sweep_job = command_job_for_sweep.sweep(
-    compute='cpu-cluster',
-    sampling_algorithm='random',
-    primary_metric='test-multi_logloss',
-    goal='Minimize'
-)
-
-# define the limits for this sweep
-sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
-```
-

[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]

-```python
-# submit the sweep
-returned_sweep_job = ml_client.create_or_update(sweep_job)
-# get a URL for the status of the job
-returned_sweep_job.services["Studio"].endpoint
-```

As seen above, the `sweep` function lets you configure the following key aspects: