Commit e28c07b

Clarify blob requirements
1 parent 805d502 commit e28c07b

5 files changed: 7 additions, 9 deletions


articles/digital-twins/concepts-apis-sdks.md

Lines changed: 3 additions, 2 deletions

@@ -142,8 +142,9 @@ Once the file has been created, upload it to a block blob in Azure Blob Storage

  Now you can proceed with calling the [Jobs API](/rest/api/digital-twins/dataplane/import-jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). You can also use the Jobs API to import each resource type independently. For more information on using the Jobs API with individual resource types, see Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-jobs-api).

- In the body of the API call, you'll provide the blob storage URL of the NDJSON input file, as well as another blob storage URL for where you'd like the output log to be stored.
- As the import job executes, a structured output log is generated by the service and stored as a new append blob in your blob container, according to the output blob URL and name you provided. Here's an example output log for a successful job importing models, twins, and relationships:
+ In the body of the API call, you'll provide the blob storage URL of the NDJSON input file. You'll also provide a new blob storage URL to indicate where you'd like the output log to be stored once the service creates it.
+ As the import job executes, a structured output log is generated by the service and stored as a new append blob in your blob container, at the URL location you specified for the output blob in the request. Here's an example output log for a successful job importing models, twins, and relationships:

  ```json
  {"timestamp":"2022-12-30T19:50:34.5540455Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Started"}}
  ```

articles/digital-twins/how-to-manage-graph.md

Lines changed: 2 additions, 2 deletions

@@ -89,7 +89,7 @@ You can view an example import file and a sample project for creating these file

  [!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]

- Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.

  ## List relationships

@@ -188,7 +188,7 @@ You can view an example import file and a sample project for creating these file

  [!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]

- Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.

  ### Import graph with Azure Digital Twins Explorer
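A sketch of what the two blob URLs look like in a Jobs API request. The instance host name, job ID, and storage URLs are placeholders, and the `inputBlobUri`/`outputBlobUri` field names and `api-version` value should be verified against the Jobs API reference before use:

```python
import json

# Hypothetical values -- substitute your own instance host, job ID, and
# blob URLs. The request is a PUT to the import-jobs endpoint; the body
# carries the input file URL and the URL where the service should write
# the output log (field names assumed from the Jobs API request schema).
instance = "myinstance.api.wcus.digitaltwins.azure.net"
job_id = "test1"
url = f"https://{instance}/jobs/imports/{job_id}?api-version=2023-10-31"

body = {
    "inputBlobUri": "https://mystorage.blob.core.windows.net/mycontainer/input.ndjson",
    "outputBlobUri": "https://mystorage.blob.core.windows.net/mycontainer/output.ndjson",
}

payload = json.dumps(body)
print(url)
print(payload)
```

Sending the request additionally requires an Azure AD bearer token for the instance, which is omitted here.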

articles/digital-twins/how-to-manage-model.md

Lines changed: 1 addition, 1 deletion

@@ -88,7 +88,7 @@ To import models in bulk, you'll need to structure your models (and any other re

  [!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]

- Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.

  ## Retrieve models

articles/digital-twins/how-to-manage-twin.md

Lines changed: 1 addition, 1 deletion

@@ -93,7 +93,7 @@ You can view an example import file and a sample project for creating these file

  [!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]

- Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.

  ## Get data for a digital twin

includes/digital-twins-bulk-blob.md

Lines changed: 0 additions, 3 deletions

@@ -9,9 +9,6 @@ ms.author: baanders

  Next, the file needs to be uploaded into an append blob in [Azure Blob Storage](../articles/storage/blobs/storage-blobs-introduction.md). For instructions on how to create an Azure storage container, see [Create a container](../articles/storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). Then, upload the file using your preferred upload method (some options are the [AzCopy command](../articles/storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../articles/storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)).

- >[!IMPORTANT]
- > The Azure Blob Storage container must have an **Append** blob type, so that the bulk import job can write to output logs.

  Once the NDJSON file has been uploaded to the container, get its **URL** within the blob container. You'll use this value later in the body of the bulk import API call.

  Here's a screenshot showing the URL value of a blob file in the Azure portal:
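Before the upload step, the input file itself has to be valid NDJSON: one JSON object per line, with no surrounding array. A minimal sketch of writing and round-trip checking such a file; the records here are placeholders, and a real import file must follow the input schema described in the linked articles:

```python
import json

# Placeholder records -- a real Jobs API input file has its own required
# structure; this only illustrates the NDJSON framing.
records = [
    {"example": "record 1"},
    {"example": "record 2"},
]

# NDJSON: serialize each object on its own line.
ndjson = "\n".join(json.dumps(r) for r in records)
with open("input.ndjson", "w") as f:
    f.write(ndjson + "\n")

# Round-trip check: every non-empty line must parse independently.
parsed = [json.loads(line) for line in ndjson.splitlines() if line]
print(len(parsed))
```

If any line fails to parse on its own, the file isn't valid NDJSON and the import job will not be able to read it.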
