
Commit 3a0d455

Merge pull request #304016 from MicrosoftDocs/main
Auto Publish – main to live - 2025-08-08 11:00 UTC
2 parents 9a74eab + c8b51fe commit 3a0d455

7 files changed (+270 -16 lines)

articles/databox/data-box-deploy-ordered.md

Lines changed: 3 additions & 0 deletions
@@ -218,6 +218,9 @@ For detailed information on how to sign in to Azure using Windows PowerShell, se
 
 ## Order Data Box
 
+> [!NOTE]
+> Azure Data Box currently does not support Azure Files Provisioned v2 Storage Accounts. For on-premises to Azure migration scenarios, you can explore [Azure Storage Mover](/azure/storage-mover/service-overview).
+
 To order a device, perform the following steps:
 
 # [Portal](#tab/portal)
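The new note excludes Azure Files Provisioned v2 accounts. If you're unsure which billing model an existing account uses, a quick check with the Azure CLI is sketched below; the resource names are placeholders, and the assumption that provisioned v2 file accounts report V2-suffixed SKU names (such as `PremiumV2_LRS`) should be verified against your subscription.

```bash
# Sketch: inspect an account's kind and SKU before ordering Data Box.
# Assumption to verify: Provisioned v2 file accounts surface V2-suffixed SKUs such as PremiumV2_LRS.
az storage account show \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --query "{kind:kind, sku:sku.name}" \
  --output table
```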

articles/energy-data-services/tutorial-reservoir-ddms-apis.md

Lines changed: 248 additions & 5 deletions
@@ -15,9 +15,6 @@ ms.date: 02/12/2025
 
 In this article, you learn how to read data from Reservoir DDMS REST APIs with curl commands.
 
-> [!IMPORTANT]
-> In the current release, only Reservoir DDMS read APIs are supported.
-
 ## Prerequisites
 
 - Create an Azure Data Manager for Energy resource. See [How to create Azure Data Manager for Energy resource](quickstart-create-microsoft-energy-data-services-instance.md).
@@ -43,6 +40,241 @@ In this article, you learn how to read data from Reservoir DDMS REST APIs with c
 "commitTime": "unknown"
 }
 ```
+1. Run the following curl command to create a new dataspace.
+   ```bash
+   curl --request POST \
+     --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'Content-Type: application/json' \
+     --header 'data-partition-id: <data-partition-id>' \
+     --data '[
+       {
+         "DataspaceId": "<dataspace_name>",
+         "Path": "<dataspace_name>",
+         "CustomData": {
+           "legaltags": ["<legal_tag_name>"],
+           "otherRelevantDataCountries": ["<country_code1>", "<country_code2>"],
+           "viewers": [ "<valid_entitlement_group1>@<data-partition-id>.dataservices.energy" ],
+           "owners": [ "<valid_entitlement_group2>@<data-partition-id>.dataservices.energy" ]
+         }
+       }
+     ]'
+   ```
+   **Sample Request:**
+
+   Consider an Azure Data Manager for Energy resource named `admetest` with a data partition named `dp1`, a legal tag named `dp1-RDDMS-Legal-Tag`, and valid entitlement groups named `data.default.viewers` and `data.default.owners`. You want to create a new dataspace named `demo/RestWrite`.
+
+   ```bash
+   curl --request POST \
+     --url https://admetest.energy.azure.com/api/reservoir-ddms/v2/dataspaces \
+     --header 'Authorization: Bearer ey.......' \
+     --header 'Content-Type: application/json' \
+     --header 'data-partition-id: dp1' \
+     --data '[
+       {
+         "DataspaceId": "demo/RestWrite",
+         "Path": "demo/RestWrite",
+         "CustomData": {
+           "legaltags": ["dp1-RDDMS-Legal-Tag"],
+           "otherRelevantDataCountries": ["US"],
+           "viewers": [ "data.default.viewers@dp1.dataservices.energy" ],
+           "owners": [ "data.default.owners@dp1.dataservices.energy" ]
+         }
+       }
+     ]'
+   ```
+   **Sample Response:**
+   ```json
+   [
+     "eml:///dataspace('demo/RestWrite')"
+   ]
+   ```
+1. Run the following curl command to start a transaction.
+   ```bash
+   curl --request POST \
+     --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id>/transactions \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'data-partition-id: <data-partition-id>'
+   ```
+   **Sample Response:**
+   ```bash
+   3f71e12a-7b05-41c1-851d-2c59498832d4
+   ```
+   **Example of an encoded dataspace ID:**
+   ```bash
+   Dataspace name: "demo/RestWrite"
+   Encoded dataspace name: "demo%2FRestWrite"
+   ```
+
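As the example above shows, the dataspace ID must be percent-encoded wherever it appears in a URL path. A minimal sketch for producing the encoded form in the shell, assuming `jq` is available:

```bash
# Percent-encode a dataspace name for use in URL paths.
printf '%s' 'demo/RestWrite' | jq -sRr @uri
# Output: demo%2FRestWrite
```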
+1. Run the following curl command to add resources using the transaction ID.
+   ```bash
+   curl --request PUT \
+     --url 'https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id>/resources?transactionId=<transaction_id>' \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'Content-Type: application/json' \
+     --header 'data-partition-id: <data-partition-id>' \
+     --data '[
+       {
+         "Citation": {
+           "Title": "CustomTestCrs",
+           "Originator": "dalsaab",
+           "Creation": "2021-09-02T07:57:28.000Z",
+           "Format": "Paradigm SKUA-GOCAD 22 Alpha 1 Build:20210830-0200 (id: origin/master|56050|1fb1cf919c2|20210827-1108) for Linux_x64_2.17_gcc91",
+           "Editor": "dalsaab",
+           "LastUpdate": "2021-09-06T13:30:24.000Z"
+         },
+         "YOffset": 6470000,
+         "ZOffset": 0,
+         "ArealRotation": {
+           "_": 0,
+           "$type": "eml20.PlaneAngleMeasure",
+           "Uom": "rad"
+         },
+         "ProjectedAxisOrder": "easting northing",
+         "ProjectedUom": "m",
+         "VerticalUom": "m",
+         "XOffset": 420000,
+         "ZIncreasingDownward": true,
+         "VerticalCrs": {
+           "EpsgCode": 6230,
+           "$type": "eml20.VerticalCrsEpsgCode"
+         },
+         "ProjectedCrs": {
+           "EpsgCode": 23031,
+           "$type": "eml20.ProjectedCrsEpsgCode"
+         },
+         "$type": "resqml20.obj_LocalDepth3dCrs",
+         "SchemaVersion": "2.0",
+         "Uuid": "7c7d7987-b7b9-4215-9014-cb7d6fb62173"
+       },
+       {
+         "Citation": {
+           "$type": "eml20.Citation",
+           "Title": "Hdf Proxy",
+           "Originator": "Mathieu",
+           "Creation": "2014-09-09T15:33:25Z",
+           "Format": "[F2I-CONSULTING:resqml2CppApi]"
+         },
+         "MimeType": "application/x-hdf5",
+         "$type": "eml20.obj_EpcExternalPartReference",
+         "SchemaVersion": "2.0.0.20140822",
+         "Uuid": "68f2a7d4-f7c1-4a75-95e9-3c6a7029fb23"
+       },
+       {
+         "Citation": {
+           "Title": "Pointset 1",
+           "Originator": "user1",
+           "Creation": "2019-01-08T13:41:25.000Z",
+           "Format": "Paradigm SKUA-GOCAD 22 Alpha 1 Build:20210830-0200 (id: origin/master|56050|1fb1cf919c2|20210827-1108) for Linux_x64_2.17_gcc91",
+           "$type": "eml20.Citation"
+         },
+         "ExtraMetadata": [
+           {
+             "Name": "pdgm/dx/resqml/creatorGroup",
+             "Value": "Interpreters",
+             "$type": "resqml20.NameValuePair"
+           }
+         ],
+         "NodePatch": [
+           {
+             "PatchIndex": 0,
+             "Count": 6,
+             "Geometry": {
+               "$type": "resqml20.PointGeometry",
+               "LocalCrs": {
+                 "$type": "eml20.DataObjectReference",
+                 "ContentType": "application/x-resqml+xml;version=2.0;type=obj_LocalDepth3dCrs",
+                 "Title": "CustomTestCrs",
+                 "UUID": "7c7d7987-b7b9-4215-9014-cb7d6fb62173"
+               },
+               "Points": {
+                 "$type": "resqml20.Point3dHdf5Array",
+                 "Coordinates": {
+                   "$type": "eml20.Hdf5Dataset",
+                   "PathInHdfFile": "/RESQML/5d27775e-5c7f-4786-a048-9a303fa1165a/points_patch0",
+                   "HdfProxy": {
+                     "$type": "eml20.DataObjectReference",
+                     "ContentType": "application/x-resqml+xml;version=2.0;type=obj_EpcExternalPartReference",
+                     "UUID": "68f2a7d4-f7c1-4a75-95e9-3c6a7029fb23",
+                     "DescriptionString": "Hdf Proxy",
+                     "VersionString": "1410276805"
+                   }
+                 }
+               }
+             }
+           }
+         ],
+         "$type": "resqml20.obj_PointSetRepresentation",
+         "SchemaVersion": "2.0.0.20140822",
+         "Uuid": "5d27775e-5c7f-4786-a048-9a303fa1165a"
+       }
+     ]'
+   ```
+   **Sample Response:**
+   ```json
+   true
+   ```
+1. Run the following curl command to add arrays using the transaction ID.
+   ```bash
+   curl --request PUT \
+     --url 'https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id>/resources/arrays?transactionId=<transaction_id>' \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'Content-Type: application/json' \
+     --header 'data-partition-id: <data-partition-id>' \
+     --data '[
+       {
+         "ContainerType": "eml20.obj_EpcExternalPartReference",
+         "ContainerUuid": "68f2a7d4-f7c1-4a75-95e9-3c6a7029fb23",
+         "PathInResource": "/RESQML/5d27775e-5c7f-4786-a048-9a303fa1165a/points_patch0",
+         "Dimensions": [
+           3,
+           6
+         ],
+         "PreferredSubarrayDimensions": [
+           3,
+           1
+         ],
+         "Data": [
+           0,0,0,
+           1,0,0,
+           0,1,2,
+           1,1,2,
+           1,0,2,
+           1,1,1
+         ],
+         "ArrayType": "Float32Array"
+       }
+     ]'
+   ```
+   **Sample Response:**
+   ```json
+   [
+     true
+   ]
+   ```
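A common mistake with this payload is a `Data` array whose length doesn't match the product of `Dimensions` (here 3 x 6 = 18 values). As a sketch, assuming the request body is saved in a hypothetical file `arrays.json` and `jq` is available, you can sanity-check it before sending:

```bash
# Check that Data length equals the product of Dimensions for each entry (arrays.json is hypothetical).
jq '.[] | ((.Dimensions | reduce .[] as $d (1; . * $d)) == (.Data | length))' arrays.json
# Expected output: true
```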
+1. Run the following curl command to commit a transaction.
+   ```bash
+   curl --request PUT \
+     --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id>/transactions/<transaction_id> \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'data-partition-id: <data-partition-id>'
+   ```
+   **Sample Response:**
+   ```json
+   true
+   ```
+1. Run the following curl command to roll back a transaction.
+   ```bash
+   curl --request DELETE \
+     --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id>/transactions/<transaction_id> \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'data-partition-id: <data-partition-id>'
+   ```
+   **Sample Response:**
+   ```json
+   true
+   ```
+
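Taken together, the write flow is: start a transaction, PUT resources and arrays with that transaction ID, then commit (or roll back). A minimal end-to-end sketch, assuming the transaction endpoint returns the bare transaction ID as its response body, as in the sample above; all placeholders follow the conventions used in these steps:

```bash
# End-to-end write sketch; placeholder values as in the steps above.
ADME_URL="<adme_url>"
DATASPACE="demo%2FRestWrite"   # URL-encoded dataspace ID
TOKEN="<access-token>"
PARTITION="<data-partition-id>"

# Start a transaction; the response body is assumed to be the bare transaction ID.
TX_ID=$(curl --silent --request POST \
  --url "https://${ADME_URL}/api/reservoir-ddms/v2/dataspaces/${DATASPACE}/transactions" \
  --header "Authorization: Bearer ${TOKEN}" \
  --header "data-partition-id: ${PARTITION}")

# PUT resources and arrays with ?transactionId=${TX_ID} as shown in the steps above, then commit:
curl --request PUT \
  --url "https://${ADME_URL}/api/reservoir-ddms/v2/dataspaces/${DATASPACE}/transactions/${TX_ID}" \
  --header "Authorization: Bearer ${TOKEN}" \
  --header "data-partition-id: ${PARTITION}"
```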
 1. Run the following curl command to list all the dataspaces.
 
 ```bash
@@ -152,7 +384,7 @@ In this article, you learn how to read data from Reservoir DDMS REST APIs with c
 ```bash
 curl --request GET \
   --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<dataspace_name>/resources/all \
-  --header 'Authorization: Bearer bearer' \
+  --header 'Authorization: Bearer <access-token>' \
   --header 'data-partition-id: <data-partition-id>'
 ```
 **Sample Response**
@@ -517,6 +749,17 @@ In this article, you learn how to read data from Reservoir DDMS REST APIs with c
 }
 ]
 ```
+1. Run the following curl command to delete a dataspace.
+   ```bash
+   curl --request DELETE \
+     --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces/<encoded_dataspace_id> \
+     --header 'Authorization: Bearer <access-token>' \
+     --header 'data-partition-id: <data-partition-id>'
+   ```
+   **Sample Response:**
+   ```bash
+
+   ```
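The empty sample response above indicates a successful delete returns no body. To confirm, you can list the dataspaces again; a sketch assuming the list endpoint is a GET on the same `/dataspaces` path used for creation:

```bash
# Sketch: list dataspaces to confirm the deletion (endpoint assumed from the create call above).
curl --request GET \
  --url https://<adme_url>/api/reservoir-ddms/v2/dataspaces \
  --header 'Authorization: Bearer <access-token>' \
  --header 'data-partition-id: <data-partition-id>'
```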
 
 ## Related content
-[Tutorial: Use Reservoir DDMS websocket API endpoints](tutorial-reservoir-ddms-websocket.md)
+[Tutorial: Use Reservoir DDMS websocket API endpoints](tutorial-reservoir-ddms-websocket.md)

articles/energy-data-services/tutorial-reservoir-ddms-websocket.md

Lines changed: 2 additions & 2 deletions
@@ -54,7 +54,7 @@ For more information about DDMS, see [DDMS concepts](concepts-ddms.md).
 1. Create the data space:
 
    ```bash
-   docker run -it --rm open-etp:ssl-client openETPServer space -S wss://${RDDMS_URL} --new -s <data_space_name> --data-partition-id ${PARTITION} --auth bearer --jwt-token ${TOKEN}
+   docker run -it --rm open-etp:ssl-client openETPServer space -S wss://${RDDMS_URL} --new -s <data_space_name> --data-partition-id ${PARTITION} --auth bearer --jwt-token ${TOKEN} --xdata "{\"viewers\":[\"data.default.viewers@<data_partition_name>.dataservices.energy\"],\"owners\":[\"data.default.owners@<data_partition_name>.dataservices.energy\"],\"legaltags\":\"<legal_tag_name>\",\"otherRelevantDataCountries\":[\"<country_code1>\", \"<country_code2>\"]}"
    ```
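Hand-escaping the `--xdata` JSON is error-prone. As a sketch, you can build it with `jq` instead (assuming `jq` is available; the array form for `legaltags` mirrors the REST API examples in the companion tutorial and is an assumption here):

```bash
# Build the --xdata payload with jq to avoid hand-escaped quotes.
XDATA=$(jq -n --arg p "<data_partition_name>" --arg lt "<legal_tag_name>" \
  '{viewers: ["data.default.viewers@\($p).dataservices.energy"],
    owners: ["data.default.owners@\($p).dataservices.energy"],
    legaltags: [$lt],
    otherRelevantDataCountries: ["US"]}')
docker run -it --rm open-etp:ssl-client openETPServer space -S wss://${RDDMS_URL} \
  --new -s <data_space_name> --data-partition-id ${PARTITION} \
  --auth bearer --jwt-token ${TOKEN} --xdata "$XDATA"
```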
 1. Get the data space:
 
@@ -78,4 +78,4 @@ For more information about DDMS, see [DDMS concepts](concepts-ddms.md).
 
 ## Related content
 * [How to use RDDMS web socket endpoints](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/blob/main/docs/testing.md?ref_type=heads)
-* [Use Reservoir DDMS APIs](tutorial-reservoir-ddms-apis.md)
+* [Use Reservoir DDMS APIs](tutorial-reservoir-ddms-apis.md)

articles/frontdoor/migration-faq.md

Lines changed: 7 additions & 5 deletions
@@ -30,13 +30,15 @@ There is no rollback support, please reach out to the support team for help if m
 
 After migration:
 
-- Verify traffic delivery continues to work.
+1. Verify traffic delivery continues to work.
 
-- Update the DNS CNAME record for your custom domain to point to the AFD Standard/Premium endpoint (exampledomain-hash.z01.azurefd.net) instead of the classic endpoint (exampledomain.azurefd.net for classic AFD or exampledomain.azureedge.net). Wait for the DNS update propagation until DNS TTL expires, depending on how long TTL is configured on DNS provider.
+1. Update the DNS CNAME record for your custom domain to point to the AFD Standard/Premium endpoint (exampledomain-hash.z01.azurefd.net) instead of the classic endpoint (exampledomain.azurefd.net for classic AFD or exampledomain.azureedge.net). Wait for the change to propagate; propagation completes once the DNS TTL expires, which depends on the TTL configured at your DNS provider.
 
-- Verify again that traffic works in the custom domain.
+1. Verify again that traffic works on the custom domain.
 
-- Once confirmed, delete the pseudo custom domain (i.e., the classic endpoint that was pointing to the AFD Standard/Premium endpoint) from the AFD Standard/Premium profile.
+1. Once confirmed, delete the pseudo custom domain (that is, the classic endpoint that was pointing to the AFD Standard/Premium endpoint) from the AFD Standard/Premium profile.
+
+1. Then delete the classic resource.
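Before deleting the classic resource, it helps to confirm the CNAME change has actually propagated. A quick check with `dig` (domain and endpoint names are placeholders):

```bash
# Verify the CNAME now resolves to the Standard/Premium endpoint.
dig +short CNAME www.contoso.com
# Expected after propagation: exampledomain-hash.z01.azurefd.net.
```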
 
 ### When I change my DNS CNAME from classic AFD endpoint to AFD standard/premium endpoint, does DNS propagation cause downtime?
 
@@ -57,4 +59,4 @@ Yes. After migration, make sure to update your DevOps pipeline to reflect the ne
 * Understand the [settings mapping between Azure Front Door tiers](tier-mapping.md).
 * Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier.md) using the Azure portal.
 * Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier-powershell.md) using Azure PowerShell.
-* Learn how to [migrate from Azure CDN from Microsoft (classic)](migrate-tier.md) to Azure Front Door using the Azure portal.
+* Learn how to [migrate from Azure CDN from Microsoft (classic)](migrate-tier.md) to Azure Front Door using the Azure portal.

articles/migrate/best-practices-least-privileged-account.md

Lines changed: 1 addition & 1 deletion
@@ -370,7 +370,7 @@ GRANT USAGE ON *.* TO 'username@ip';
 GRANT PROCESS ON *.* TO 'username@ip';
 GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv) ON mysql.user TO 'username@ip';
 GRANT SELECT ON information_schema.* TO 'username@ip';
-GRANT SELECT ON performance_schema.* TO username@ip';
+GRANT SELECT ON performance_schema.* TO 'username@ip';
 
 ```
 
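After granting these privileges, you can confirm what the account actually holds. A quick sketch using the standard `mysql` client (host and account names are placeholders):

```bash
# Verify the effective privileges of the least-privileged account.
mysql --host <server-ip> --user username --password \
  --execute "SHOW GRANTS FOR CURRENT_USER();"
```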
articles/migrate/tutorial-discover-mysql-database-instances.md

Lines changed: 7 additions & 2 deletions
@@ -66,9 +66,14 @@ The following table lists the regions that support MySQL Discovery and Assessmen
 > GRANT PROCESS ON *.* TO 'username@ip';
 > GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv) ON mysql.user TO 'username@ip';
 > GRANT SELECT ON information_schema.* TO 'username@ip';
-> GRANT SELECT ON performance_schema.* TO username@ip';
+> GRANT SELECT ON performance_schema.* TO 'username@ip';
 
-You can review the discovered MySQL databases after around 24 hours of discovery initiation, through the **Discovered servers** view.
+You can review the discovered MySQL databases around 24 hours after discovery initiation, through the **Discovered servers** view. To expedite the discovery of your MySQL instances, follow these steps:
+
+- After adding the MySQL credentials in the appliance configuration manager, restart the discovery services on the appliance.
+- In your Azure Migrate project, navigate to the **Servers, databases and web apps** blade. On this tab, locate **Appliances** on the right side of the **Assessment tools** section.
+- Select the number displayed against the total. This takes you to the **Appliances** blade. Select the appliance where the credentials were added.
+- Select the **Refresh services** link at the bottom of the appliance screen. This restarts all the services, and MySQL instances start appearing in the inventory after the refresh.
 
 1. On the **Azure Migrate: Discovery and assessment** tile on the Hub page, select the number below the **Discovered servers**.
 

includes/data-box-supported-storage-accounts.md

Lines changed: 2 additions & 1 deletion
@@ -50,11 +50,12 @@ For export orders, following table shows the supported storage accounts.
 - For General-purpose accounts:
   - For import orders, Data Box doesn't support Queue, Table, and Disk storage types.
   - For export orders, Data Box doesn't support Queue, Table, Disk, and Azure Data Lake Gen2 storage types.
+- For FileStorage storage accounts, Data Box doesn't support Provisioned v2 accounts.
 - Data Box doesn't support append blobs for Blob Storage and Block Blob Storage accounts.
 - Data uploaded to page blobs must be 512 bytes aligned such as VHDs.
 - For exports:
   - A maximum of 120 or 525 TB can be exported when using Data Box 120 and Data Box 525, respectively.
   - A maximum of 80 TB can be exported when using Data Box.
   - File history and blob snapshots aren't exported.
   - Archive blobs aren't supported for export. Rehydrate the blobs in archive tier before exporting. For more information, see [Rehydrate an archived blob to an online tier](../articles/storage/blobs/archive-rehydrate-overview.md).
-  - Data Box only supports block blobs with Azure Data Lake Gen2 Storage accounts. Page blobs are not allowed and should not be uploaded over REST. If page blobs are uploaded over REST, these blobs would fail when data is uploaded to Azure.
+  - Data Box only supports block blobs with Azure Data Lake Gen2 Storage accounts. Page blobs aren't allowed and shouldn't be uploaded over REST. If page blobs are uploaded over REST, these blobs would fail when data is uploaded to Azure.
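The page-blob rule above requires uploads to be 512-byte aligned (VHDs are by construction). A pre-flight sketch for any file you intend to land as a page blob, assuming GNU `stat` and a placeholder file name:

```bash
# Check that a candidate page blob is 512-byte aligned before upload.
size=$(stat -c %s disk.vhd)   # disk.vhd is a placeholder
if [ $((size % 512)) -eq 0 ]; then
  echo "OK: ${size} bytes is 512-byte aligned"
else
  echo "Not aligned: ${size} bytes"
fi
```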
