### Using pre-labeled training data from local machine
If you have previously labeled data that you would like to use to train your model, you will first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register it as a [data asset](how-to-create-data-assets.md).
The following script uploads the image data on your local machine, at the path "./data/odFridgeObjects", to your datastore in Azure Blob Storage. It then creates a new data asset with the name "fridge-items-images-object-detection" in your Azure ML Workspace.

If a data asset with the name "fridge-items-images-object-detection" already exists in your Azure ML Workspace, it updates the version number of the data asset and points it to the new location where the image data was uploaded.
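A minimal sketch of such a script, using the Azure ML Python SDK v2 (`azure-ai-ml`), is shown below; the subscription, resource group, and workspace values are placeholders that you need to replace with your own.

```python
# Sketch: upload a local image folder and register it as a data asset (SDK v2).
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Connect to the Azure ML workspace (placeholder identifiers).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Point a data asset at the local image folder. Creating it uploads the files
# to the workspace's default blob datastore and registers (or versions) the asset.
my_data = Data(
    path="./data/odFridgeObjects",
    type=AssetTypes.URI_FOLDER,
    name="fridge-items-images-object-detection",
    description="Fridge-items images for object detection",
)

uri_folder_data_asset = ml_client.data.create_or_update(my_data)
print(uri_folder_data_asset.path)  # azureml:// URI of the uploaded images
```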
If you already have your data present in an existing datastore and want to create a data asset out of it, you can do so by providing the path to the data in the datastore instead of the path on your local machine. Update the code [above](how-to-prepare-datasets-for-automl-images.md#using-pre-labeled-training-data-from-local-machine) with the following snippet.
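For example, assuming the images already sit in a datastore named `workspaceblobstore` under a folder `odFridgeObjects/` (both names are illustrative), the `Data` definition can point at the datastore path instead:

```python
# Sketch: reference data that already lives in a datastore (illustrative names).
my_data = Data(
    path="azureml://datastores/workspaceblobstore/paths/odFridgeObjects/",
    type=AssetTypes.URI_FOLDER,
    name="fridge-items-images-object-detection",
    description="Fridge-items images for object detection",
)

uri_folder_data_asset = ml_client.data.create_or_update(my_data)
```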
Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
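For instance, a single annotation line for object detection looks roughly like the following (shown on multiple lines for readability); the image URI, image details, and box coordinates are illustrative values only:

```json
{
  "image_url": "azureml://subscriptions/<sub-id>/resourcegroups/<rg>/workspaces/<ws>/datastores/workspaceblobstore/paths/odFridgeObjects/images/1.jpg",
  "image_details": {"format": "jpg", "width": "499px", "height": "666px"},
  "label": [
    {"label": "can", "topX": 0.1, "topY": 0.2, "bottomX": 0.3, "bottomY": 0.4, "isCrowd": 0}
  ]
}
```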
If your training data is in a different format (like Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in the [notebook examples](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs).
### Using pre-labeled training data from Azure Blob storage
If you have your labeled training data present in a container in Azure Blob storage, you can access it directly from there by [creating a datastore referring to that container](how-to-datastore.md#create-an-azure-blob-datastore).
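A minimal sketch of registering such a datastore with the Python SDK v2 is shown below; the datastore, storage account, and container names are placeholders (reuse the `ml_client` from the earlier snippet):

```python
# Sketch: register an existing blob container as a datastore (placeholder names).
from azure.ai.ml.entities import AzureBlobDatastore

blob_datastore = AzureBlobDatastore(
    name="fridge_items_datastore",
    description="Datastore pointing to the blob container with labeled images",
    account_name="<STORAGE_ACCOUNT_NAME>",
    container_name="<CONTAINER_NAME>",
)
ml_client.create_or_update(blob_datastore)
```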
## Create MLTable
Once you have your labeled data in JSONL format, you can use it to create an `MLTable` as shown below. MLTable packages your data into a consumable object for training.
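For example, an MLTable file saved next to a JSONL annotations file (assumed here to be named `train_annotations.jsonl`) can look like this:

```yaml
paths:
  - file: ./train_annotations.jsonl
transformations:
  - read_json_lines:
      encoding: utf8
      invalid_lines: error
      include_path_column: false
  - convert_column_types:
      - columns: image_url
        column_type: stream_info
```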
You can then pass in the `MLTable` as a data input for your AutoML training job.
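As a sketch, the training and validation MLTable folders can be wrapped as inputs of type `mltable` and passed to an AutoML image job; the folder paths, compute name, and experiment name below are placeholders:

```python
# Sketch: pass MLTable folders as inputs to an AutoML image object detection job.
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

training_data = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
validation_data = Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")

image_object_detection_job = automl.image_object_detection(
    compute="<COMPUTE_CLUSTER_NAME>",
    experiment_name="fridge-items-object-detection",
    training_data=training_data,
    validation_data=validation_data,
    target_column_name="label",
)

returned_job = ml_client.jobs.create_or_update(image_object_detection_job)
```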
* [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md).
* [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).