* [Fine Tune LLAMA 2 using multiple nodes](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/system/finetune/Llama-notebooks/multinode-text-classification/emotion-detection-llama-multinode-serverless.ipynb)
* When you create your own compute cluster, you reference it by name in the command job, for example `compute="cpu-cluster"`. With serverless, you skip creating a compute cluster and omit the `compute` parameter entirely; any job that doesn't specify `compute` runs on serverless compute. Omit the compute name in your CLI or SDK jobs to use serverless compute for the following job types, and optionally specify the resources the job needs in terms of instance count and instance type (a minimal SDK sketch follows this list):
* Command jobs, including interactive jobs and distributed training
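As a rough illustration of the SDK path, here is a minimal sketch of a command job submitted to serverless compute. The workspace identifiers, command string, and environment reference are placeholders, not values from this article; the only point is that no `compute` name is passed.

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- substitute your own subscription, resource group, and workspace.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

# No `compute` argument: because the job doesn't name a compute target,
# it runs on serverless compute with the workspace defaults.
job = command(
    command="echo 'hello from serverless compute'",
    environment="<CURATED_OR_CUSTOM_ENVIRONMENT>@latest",  # placeholder environment reference
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```

The same omission works from the CLI: leave `compute:` out of the job YAML and the job is scheduled on serverless compute.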
@@ -223,7 +223,7 @@ You can override these defaults. If you want to specify the VM type or number o
from azure.ai.ml import command
from azure.ai.ml import MLClient # Handle to the workspace
from azure.identity import DefaultAzureCredential # Authentication package
-from azure.ai.ml.entities import ResourceConfiguration
+from azure.ai.ml.entities import JobResourceConfiguration
credential = DefaultAzureCredential()
# Get a handle to the workspace. You can find the info on the workspace tab on ml.azure.com
@@ -236,7 +236,7 @@ You can override these defaults. If you want to specify the VM type or number o
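Tying the two hunks together, here is a hedged sketch of the override that the hunk context describes ("You can override these defaults. If you want to specify the VM type or number o[f instances]…"), reusing the `ml_client` handle from the snippet earlier. The training command, source folder, environment, and VM size below are illustrative assumptions, not values taken from the diff.

```python
from azure.ai.ml import command
from azure.ai.ml.entities import JobResourceConfiguration  # JobResourceConfiguration, per the import fix above

# Placeholder command and environment; the relevant part is the `resources` override.
job = command(
    command="python train.py",
    code="./src",                              # hypothetical local source folder
    environment="<ENVIRONMENT_NAME>@latest",   # placeholder environment reference
)

# Still no `compute`, so the job stays on serverless compute, but the default
# instance type and count are overridden explicitly.
job.resources = JobResourceConfiguration(
    instance_type="Standard_DS3_v2",  # example VM size -- pick one supported for serverless compute in your region
    instance_count=2,
)

returned_job = ml_client.jobs.create_or_update(job)  # `ml_client` from the earlier sketch
```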