articles/databox/data-box-deploy-ordered.md — 3 additions, 0 deletions

@@ -218,6 +218,9 @@ For detailed information on how to sign in to Azure using Windows PowerShell, se

  ## Order Data Box

+ > [!NOTE]
+ > Azure Data Box currently does not support Azure Files Provisioned v2 Storage Accounts. For on-premises to Azure migration scenarios, you can explore [Azure Storage Mover](/azure/storage-mover/service-overview).
articles/energy-data-services/tutorial-reservoir-ddms-apis.md — 248 additions, 5 deletions

@@ -15,9 +15,6 @@ ms.date: 02/12/2025

  In this article, you learn how to read data from Reservoir DDMS REST APIs with curl commands.

- > [!IMPORTANT]
- > In the current release, only Reservoir DDMS read APIs are supported.
-
  ## Prerequisites

  - Create an Azure Data Manager for Energy resource. See [How to create Azure Data Manager for Energy resource](quickstart-create-microsoft-energy-data-services-instance.md).

@@ -43,6 +40,241 @@ In this article, you learn how to read data from Reservoir DDMS REST APIs with c

        "commitTime": "unknown"
      }
      ```
+ 1. Run the following curl command to create a new dataspace.

+    Consider an Azure Data Manager for Energy resource named `admetest` with a data partition named `dp1`, a legal tag named `dp1-RDDMS-Legal-Tag`, and valid entitlement groups named `data.default.viewers` and `data.default.owners`. You want to create a new dataspace named `demo/RestWrite`.
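For illustration only, a minimal sketch of what such a create-dataspace call might look like. The host name, API path, URL encoding of the dataspace name, and payload shape below are assumptions rather than the tutorial's documented API surface; use the exact request from the tutorial's added steps.

```bash
# Hypothetical sketch — endpoint path and payload shape are assumptions.
export TOKEN="<access-token>"        # bearer token for the admetest instance
export PARTITION="dp1"               # data partition ID

curl --request PUT \
  --url "https://admetest.energy.azure.com/api/reservoir-ddms/v2/dataspaces/demo%2FRestWrite" \
  --header "Authorization: Bearer $TOKEN" \
  --header "data-partition-id: $PARTITION" \
  --header "Content-Type: application/json" \
  --data '{
    "legal": { "legaltags": ["dp1-RDDMS-Legal-Tag"] },
    "acl": {
      "viewers": ["data.default.viewers@dp1.dataservices.energy"],
      "owners":  ["data.default.owners@dp1.dataservices.energy"]
    }
  }'
```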
@@ -78,4 +78,4 @@ For more information about DDMS, see [DDMS concepts](concepts-ddms.md).

  ## Related content

  * [How to use RDDMS web socket endpoints](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/blob/main/docs/testing.md?ref_type=heads)
articles/frontdoor/migration-faq.md — 7 additions, 5 deletions

@@ -30,13 +30,15 @@ There is no rollback support, please reach out to the support team for help if m

  After migration:

- - Verify traffic delivery continues to work.
+ 1. Verify traffic delivery continues to work.

- - Update the DNS CNAME record for your custom domain to point to the AFD Standard/Premium endpoint (exampledomain-hash.z01.azurefd.net) instead of the classic endpoint (exampledomain.azurefd.net for classic AFD or exampledomain.azureedge.net). Wait for the DNS update propagation until DNS TTL expires, depending on how long TTL is configured on DNS provider.
+ 1. Update the DNS CNAME record for your custom domain to point to the AFD Standard/Premium endpoint (exampledomain-hash.z01.azurefd.net) instead of the classic endpoint (exampledomain.azurefd.net for classic AFD or exampledomain.azureedge.net). Wait for the DNS update to propagate until the DNS TTL expires; propagation time depends on the TTL configured at your DNS provider (see the DNS update sketch after this list).

- - Verify again that traffic works in the custom domain.
+ 1. Verify again that traffic works in the custom domain.

- - Once confirmed, delete the pseudo custom domain (i.e., the classic endpoint that was pointing to the AFD Standard/Premium endpoint) from the AFD Standard/Premium profile.
+ 1. Once confirmed, delete the pseudo custom domain (i.e., the classic endpoint that was pointing to the AFD Standard/Premium endpoint) from the AFD Standard/Premium profile.

+ 1. Then delete the classic resource.
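As a sketch of the DNS update in step 2, assuming the custom domain's zone is hosted in Azure DNS; the resource group, zone name, and record name are placeholders, and other DNS providers need the equivalent change in their own tooling.

```bash
# Placeholder names throughout; adjust to your zone and record.
# Point the custom domain's CNAME at the Standard/Premium endpoint.
az network dns record-set cname set-record \
  --resource-group myResourceGroup \
  --zone-name exampledomain.com \
  --record-set-name www \
  --cname exampledomain-hash.z01.azurefd.net

# Inspect the record's TTL to estimate how long propagation can take.
az network dns record-set cname show \
  --resource-group myResourceGroup \
  --zone-name exampledomain.com \
  --name www \
  --query ttl
```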
  ### When I change my DNS CNAME from classic AFD endpoint to AFD standard/premium endpoint, does DNS propagation cause downtime?
@@ -57,4 +59,4 @@ Yes. After migration, make sure to update your DevOps pipeline to reflect the ne

  * Understand the [settings mapping between Azure Front Door tiers](tier-mapping.md).
  * Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier.md) using the Azure portal.
  * Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier-powershell.md) using Azure PowerShell.
- * Learn how to [migrate from Azure CDN from Microsoft (classic)](migrate-tier.md) to Azure Front Door using the Azure portal.
+ * Learn how to [migrate from Azure CDN from Microsoft (classic)](migrate-tier.md) to Azure Front Door using the Azure portal.
articles/migrate/tutorial-discover-mysql-database-instances.md — 7 additions, 2 deletions

@@ -66,9 +66,14 @@ The following table lists the regions that support MySQL Discovery and Assessmen

  > GRANT PROCESS ON *.* TO 'username@ip';
  > GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv) ON mysql.user TO 'username@ip';
  > GRANT SELECT ON information_schema.* TO 'username@ip';
- > GRANT SELECT ON performance_schema.* TO username@ip';
+ > GRANT SELECT ON performance_schema.* TO 'username@ip';
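A consolidated sketch of provisioning the discovery account with the grants shown above; the user name, appliance host IP, and password are placeholders, not values from the tutorial.

```bash
# Placeholder user and host; replace with the account and appliance IP you use.
mysql -u root -p <<'SQL'
CREATE USER 'migrateuser'@'10.1.1.20' IDENTIFIED BY '<strong-password>';
GRANT PROCESS ON *.* TO 'migrateuser'@'10.1.1.20';
GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv)
  ON mysql.user TO 'migrateuser'@'10.1.1.20';
GRANT SELECT ON information_schema.* TO 'migrateuser'@'10.1.1.20';
GRANT SELECT ON performance_schema.* TO 'migrateuser'@'10.1.1.20';
FLUSH PRIVILEGES;
SQL
```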
- You can review the discovered MySQL databases after around 24 hours of discovery initiation, through the **Discovered servers** view.
+ You can review the discovered MySQL databases around 24 hours after discovery is initiated, through the **Discovered servers** view. To expedite the discovery of your MySQL instances, follow these steps:

+ - After adding the MySQL credentials in the appliance configuration manager, restart the discovery services on the appliance.
+ - In your Azure Migrate project, navigate to the **Servers, databases and Web apps** blade. On this tab, locate **Appliances** on the right side of the **Assessment tools** section.
+ - Select the number shown against the total. This takes you to the **Appliances** blade. Select the appliance where the credentials were added.
+ - Select the **Refresh services** link at the bottom of the appliance screen. This restarts all the services, and MySQL instances start appearing in the inventory after the refresh.

  1. On the **Azure Migrate: Discovery and assessment** tile on the Hub page, select the number below the **Discovered servers**.
includes/data-box-supported-storage-accounts.md — 2 additions, 1 deletion

@@ -50,11 +50,12 @@ For export orders, following table shows the supported storage accounts.

  - For General-purpose accounts:
    - For import orders, Data Box doesn't support Queue, Table, and Disk storage types.
    - For export orders, Data Box doesn't support Queue, Table, Disk, and Azure Data Lake Gen2 storage types.
+ - For FileStorage Storage accounts, Data Box doesn't support Provisioned v2 accounts.
  - Data Box doesn't support append blobs for Blob Storage and Block Blob Storage accounts.
  - Data uploaded to page blobs must be 512-byte aligned, such as VHDs (see the alignment check sketched after this list).
  - For exports:
    - A maximum of 120 or 525 TB can be exported when using Data Box 120 and Data Box 525, respectively.
    - A maximum of 80 TB can be exported when using Data Box.
    - File history and blob snapshots aren't exported.
    - Archive blobs aren't supported for export. Rehydrate the blobs in archive tier before exporting. For more information, see [Rehydrate an archived blob to an online tier](../articles/storage/blobs/archive-rehydrate-overview.md).
- - Data Box only supports block blobs with Azure Data Lake Gen2 Storage accounts. Page blobs are not allowed and should not be uploaded over REST. If page blobs are uploaded over REST, these blobs would fail when data is uploaded to Azure.
+ - Data Box only supports block blobs with Azure Data Lake Gen2 Storage accounts. Page blobs aren't allowed and shouldn't be uploaded over REST. If page blobs are uploaded over REST, these blobs would fail when data is uploaded to Azure.
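A minimal local check for that 512-byte alignment requirement, assuming a GNU/Linux shell and a file named `disk.vhd` (both placeholders).

```bash
# Page blob data (for example, VHDs) must be a whole multiple of 512 bytes.
size=$(stat -c %s disk.vhd)          # file size in bytes (GNU stat)
if [ $((size % 512)) -eq 0 ]; then
  echo "disk.vhd is 512-byte aligned"
else
  echo "disk.vhd is NOT 512-byte aligned (remainder: $((size % 512)) bytes)"
fi
```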