description: This article provides information on how to view automation limits and request a quota increase or decrease.
services: automation
ms.topic: how-to
-ms.date: 01/06/2025
+ms.date: 01/28/2025
ms.service: azure-automation
---

@@ -50,6 +50,23 @@ Follow the steps to view your current quota and request for changes in quota:
> [!NOTE]
> Quota increases are subject to availability of resources in the selected region.

+## View current limits and request an increase on the Quotas page
+
+You can also view your current usage and request a quota increase or decrease for the number of Automation accounts per subscription on the Quotas page in the Azure portal. This capability isn't currently available for viewing the number of concurrently running jobs in your Automation account.
+
+Follow these steps to view current limits and request a quota increase:
+
+1. Go to the [My Quotas](https://ms.portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/myQuotas) page and choose the provider **Automation accounts**. The filter options at the top of the page allow you to filter by location, subscription, and usage.
+1. View your current usage and the limit on the number of Automation accounts. (For a command-line way to check this usage, see the sketch after these steps.)
+
+   :::image type="content" source="./media/automation-limits-quotas/view-current-usage.png" alt-text="Screenshot showing how to view current usage.":::
+
+1. Select the pencil icon in the **Request adjustment** column to request additional quota.
+1. In the **New Quota request** pane, enter the **New limit** for the number of Automation accounts based on your business requirements.
+1. Select **Submit**. It may take a few minutes to process your request.
+   - If your request is rejected, select **Create a Support request**. Some fields are auto-populated. Complete the remaining details in the support request and submit it.
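The steps above use the portal. As a command-line alternative for checking usage, the following is a minimal sketch, not part of this article, that counts the Automation accounts that already exist in a subscription so you can compare the count against the limit shown on the Quotas page. The subscription ID is a placeholder, and the per-region breakdown is included only because the Quotas page lets you filter by location.

```azurecli
# Hypothetical usage check: count existing Automation accounts in the subscription.
az account set --subscription "<subscription-id>"

# Total Automation accounts in the subscription
az resource list \
    --resource-type "Microsoft.Automation/automationAccounts" \
    --query "length(@)" \
    --output tsv

# Breakdown by region (the Quotas page can filter by location)
az resource list \
    --resource-type "Microsoft.Automation/automationAccounts" \
    --query "[].location" \
    --output tsv | sort | uniq -c
```

Compare these counts with the limit shown on the Quotas page before submitting a new quota request.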
## Next steps
Learn more about [the default quotas or limits offered to different resources in Azure Automation](automation-subscription-limits-faq.md).

articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ In this tutorial, you learn how to:
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
* Access granted to Azure OpenAI in the desired Azure subscription

  Currently, you must apply for access to Azure OpenAI. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>.
-* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>
+* <a href="https://www.python.org/" target="_blank">Python 3.8 or later version</a>
* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed. This model is currently only available in [certain regions](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). See the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource) for instructions on how to deploy the model.

articles/azure-netapp-files/backup-requirements-considerations.md: 4 additions & 2 deletions

@@ -5,7 +5,7 @@ services: azure-netapp-files
author: b-hchen
ms.service: azure-netapp-files
ms.topic: conceptual
-ms.date: 08/13/2024
+ms.date: 01/27/2025
ms.author: anfdocs
---

# Requirements and considerations for Azure NetApp Files backup
@@ -24,7 +24,7 @@ Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* There can be a delay of up to 5 minutes in displaying a backup after the backup is actually completed.

-* For volumes larger than 10 TB, it can take multiple hours to transfer all the data from the backup media.
+* For volumes larger than 10 TiB, it can take multiple hours to transfer all the data from the backup media.

* The Azure NetApp Files backup feature supports backing up the daily, weekly, and monthly local snapshots to Azure storage. Hourly backups aren't currently supported.

@@ -36,6 +36,8 @@ Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* Policy-based (scheduled) Azure NetApp Files backup is independent of [snapshot policy configuration](azure-netapp-files-manage-snapshots.md).

+* You can't apply a backup policy to a volume while a manual backup is in progress. Wait for the manual backup to complete before applying the policy.
+
* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume.
Backups on a destination volume are only supported for manually created snapshots. To take backups of a destination volume, create a snapshot on the source volume, then wait for the snapshot to be replicated to the destination volume. From the destination volume, you select the snapshot for backup. Scheduled backups on a destination volume aren't supported.
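As a rough illustration of that flow, the following sketch, which isn't part of this article, creates a manual snapshot on the source volume with the Azure CLI. Every resource name is a placeholder, and parameter names can vary slightly between CLI versions. After replication copies the snapshot to the destination volume, you still select it for backup from the destination volume as described above.

```azurecli
# Hypothetical example: create a manual snapshot on the SOURCE volume of a
# cross-region or cross-zone replication pair. Once it replicates to the
# destination volume, select that snapshot for backup on the destination side.
# Depending on your CLI version, you may also need to pass --location.
az netappfiles snapshot create \
    --resource-group "<resource-group>" \
    --account-name "<netapp-account>" \
    --pool-name "<capacity-pool>" \
    --volume-name "<source-volume>" \
    --name "manual-snapshot-for-backup"
```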

articles/azure-netapp-files/maxfiles-concept.md: 7 additions & 6 deletions

@@ -5,14 +5,12 @@ services: azure-netapp-files
author: b-hchen
ms.service: azure-netapp-files
ms.topic: conceptual
-ms.date: 08/09/2024
+ms.date: 01/27/2025
ms.author: anfdocs
---

# Understand `maxfiles` limits in Azure NetApp Files

-Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. If you experience this issue, contact Microsoft technical support.
-
-The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume, where the service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size and uses the following guidelines.
+Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume: the service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size, using the following guidelines.

- For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.
- For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity, up to a maximum of 2,147,483,632 (see the rough estimate sketched after this list).
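The following is a minimal sketch, not part of the article, that turns the guideline above into a rough estimate of the default `maxfiles` limit for a regular volume. The service assigns the actual value, so the output is an approximation and can differ slightly from the tables that follow.

```bash
# Rough estimate of the default maxfiles limit for a regular volume.
# Assumption: <= 683 GiB uses the fixed default; larger volumes get roughly
# one inode per 32 KiB of provisioned capacity, capped at 2,147,483,632.
SIZE_GIB=4096   # provisioned volume quota in GiB (example: 4 TiB)

if [ "$SIZE_GIB" -le 683 ]; then
  MAXFILES=21251126
else
  MAXFILES=$(( SIZE_GIB * 1024 * 1024 / 32 ))   # capacity in KiB / 32 KiB per inode
  if [ "$MAXFILES" -gt 2147483632 ]; then
    MAXFILES=2147483632
  fi
fi

echo "Estimated maxfiles limit for ${SIZE_GIB} GiB: ${MAXFILES}"
```

For a 100-TiB volume, the same arithmetic exceeds the cap, which is why the table below lists 2,147,483,632 for that size.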
@@ -32,7 +30,7 @@ The following table shows examples of the relationship `maxfiles` values based o
| 50 TiB (53,687,091,200 KiB) | 1,593,835,519 |
| 100 TiB (107,374,182,400 KiB) | 2,147,483,632 |

-The following table shows examples of the relationship `maxfiles` values based on volume sizes for large volumes.
+The following table shows examples of the relationship between `maxfiles` values and volume sizes for [large volumes](large-volumes-requirements-considerations.md).

| Volume size | Estimated `maxfiles` limit |
| - | - |
@@ -45,9 +43,12 @@ To see the `maxfiles` allocation for a specific volume size, check the **Maximum
:::image type="content" source="./media/azure-netapp-files-resource-limits/maximum-number-files.png" alt-text="Screenshot of volume overview menu." lightbox="./media/azure-netapp-files-resource-limits/maximum-number-files.png":::

+When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. Adjusting your quota based on this information can create greater inode availability. If you have further issues with the `maxfiles` limit, contact Microsoft technical support.
+

You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](azure-netapp-files-resource-limits.md#request-limit-increase) for the volume.

articles/azure-netapp-files/performance-benchmarks-linux.md: 16 additions & 21 deletions

@@ -6,7 +6,7 @@ author: b-hchen
ms.service: azure-netapp-files
ms.custom: linux-related-content
ms.topic: conceptual
-ms.date: 11/08/2024
+ms.date: 01/27/2025
ms.author: anfdocs
---

# Azure NetApp Files regular volume performance benchmarks for Linux
@@ -100,6 +100,7 @@ As the read-write I/OP mix increases towards write-heavy, the total I/OPS decrea
:::image type="content" source="./media/performance-benchmarks-linux/8K-random-iops-no-cache.png" alt-text="Diagram of benchmark tests with 8 KiB, random, client caching excluded." lightbox="./media/performance-benchmarks-linux/8K-random-iops-no-cache.png":::

+
## Side-by-side comparisons

To illustrate how caching can influence the performance benchmark tests, the following graph shows total I/OPS for 4-KiB tests with and without caching mechanisms in place. As shown, caching provides a slight performance boost, with fairly consistent I/OPS trending.
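For readers who want to run a comparable random I/O test themselves, here's a minimal sketch, not the configuration used to produce these results, of a 4-KiB random FIO job with a 75/25 read/write mix against an NFS-mounted Azure NetApp Files volume. The mount path, job size, thread count, and queue depth are illustrative assumptions only.

```bash
# Illustrative 4-KiB random read/write test (75% reads / 25% writes).
# --direct=1 bypasses the Linux page cache so client caching doesn't inflate results.
fio --name=4k-random-mix \
    --directory=/mnt/anf-volume \
    --rw=randrw --rwmixread=75 \
    --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=16 --numjobs=8 \
    --size=4G --time_based --runtime=300 \
    --group_reporting
```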
@@ -142,44 +143,38 @@ In the graph below, testing shows that an Azure NetApp Files regular volume can

:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-write.png" alt-text="Diagram of 64-KiB benchmark tests with sequential I/O and caching included." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-write.png":::

### Results: 64 KiB sequential I/O, reads vs. writes, baseline without caching

-In this benchmark, FIO ran using looping logic that less aggressively populated the cache. Client caching didn't influence the results. This configuration results in slightly better write performance numbers, but lower read numbers than tests without caching.
+In this baseline benchmark, testing demonstrates that an Azure NetApp Files regular volume can handle approximately 3,600 MiB/s of pure sequential 64-KiB reads and approximately 2,400 MiB/s of pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.

-In the following graph, testing demonstrates that an Azure NetApp Files regular volume can handle between approximately 3,600MiB/s pure sequential 64-KiB reads and approximately 2,400MiB/s pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.
+With respect to pure reads, the 64-KiB baseline performed slightly better than the 256-KiB baseline. When it comes to pure writes and all mixed read/write workloads, however, the 256-KiB baseline outperformed 64 KiB, indicating that a larger block size of 256 KiB is more effective overall for high-throughput workloads.

-The read-write mix for the workload was adjusted by 25% for each run.
+The read-write mix for the workload was adjusted by 25% for each run.

:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-write-no-cache.png" alt-text="Diagram of 64-KiB benchmark tests with sequential I/O, caching excluded." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-write-no-cache.png":::

In this benchmark, FIO ran using looping logic that less aggressively populated the cache, so caching didn't influence the results. This configuration results in slightly lower write performance numbers than the 64-KiB tests, but higher read numbers than the same 64-KiB tests run without caching.

+### Results: 256 KiB sequential I/O without caching

-In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 3,500MiB/s pure sequential 256-KiB reads and approximately 2,500MiB/s pure sequential 256-KiB writes. During the tests, a 50/50 mix showed total throughput peaked higher than a pure sequential read workload.
+In the following two baseline benchmarks, FIO was used to measure the amount of sequential I/O (read and write) a single regular volume in Azure NetApp Files can deliver. In order to produce a baseline that reflects the true bandwidth that a fully uncached read workload can achieve, FIO was configured to run with the parameter `randrepeat=0` for dataset generation. Each test iteration was offset by reading a completely separate large dataset that wasn't part of the benchmark in order to clear any caching that might have occurred with the benchmark dataset.

-The read-write mix for the workload was adjusted in 25% increments for each run.
+In this graph, testing shows that an Azure NetApp Files regular volume can handle approximately 3,500 MiB/s of pure sequential 256-KiB reads and approximately 2,500 MiB/s of pure sequential 256-KiB writes. During the tests, a 50/50 mix showed total throughput that peaked higher than a pure sequential read workload.

:::image type="content" source="./media/performance-benchmarks-linux/256K-sequential-no-cache.png" alt-text="Diagram of 256-KiB sequential benchmark tests." lightbox="./media/performance-benchmarks-linux/256K-sequential-no-cache.png":::
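To make the `randrepeat=0` detail above concrete, here's a minimal sketch, not the exact job used for these benchmarks, of a 256-KiB sequential FIO run with a 50/50 read/write mix against an NFS-mounted volume. Paths, sizes, and thread counts are assumptions for illustration.

```bash
# Illustrative 256-KiB sequential read/write test (50/50 mix).
# --randrepeat=0 avoids a repeatable data pattern during dataset generation,
# and --direct=1 keeps the client page cache out of the measurement.
fio --name=256k-sequential-mix \
    --directory=/mnt/anf-volume \
    --rw=rw --rwmixread=50 \
    --bs=256k --direct=1 --ioengine=libaio \
    --iodepth=64 --numjobs=4 \
    --size=8G --randrepeat=0 \
    --time_based --runtime=300 \
    --group_reporting
```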

-### Side-by-side comparison
-
-To better show how caching can influence the performance benchmark tests, the following graph shows total MiB/s for 64-KiB tests with and without caching mechanisms in place. Caching provides an initial slight performance boost for total MiB/s because caching generally improves reads more so than writes. As the read/write mix changes, the total throughput without caching exceeds the results that utilize client caching.

The following tests show a high I/OP benchmark using a single client with 64-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost the performance for a single client workload, the `nconnect` mount option was leveraged for better parallelism in comparison to client mounts that didn't use the `nconnect` mount option. These tests were run only with caching excluded.

-### Results: 64 KiB, sequential, caching excluded, with and without `nconnect`

To demonstrate how caching influences performance results, FIO was used in the following microbenchmark comparison to measure the amount of sequential I/O (read and write) a single regular volume in Azure NetApp Files can deliver. This test is contrasted with the benefits a partially cacheable workload may provide.

-The following results show a scale-up test’s results when reading and writing in 4-KiB chunks on a NFSv3 mount on a single client with and without parallelization of operations (`nconnect`). The graphs show that as the I/O depth grows, the I/OPS also increase. But when using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections per mount point. In addition, the total latency for the operations is generally lower when using `nconnect`.
+In the result without caching, testing was designed to mitigate any caching taking place as described in the baseline benchmarks above.
+In the other result, FIO was used against Azure NetApp Files regular volumes without the `randrepeat=0` parameter and using looping test iteration logic that slowly populated the cache over time. The combination of these factors produced an indeterminate amount of caching, boosting the overall throughput. This configuration resulted in slightly better overall read performance numbers than tests run without caching.

-:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-no-cache-no-nconnect.png" alt-text="Diagram comparing 64-KiB tests without nconnect or caching." lightbox="./media/performance-benchmarks-linux/64K-sequential-no-cache-no-nconnect.png":::
+The test results displayed in the graph show a side-by-side comparison of read performance with and without the caching influence, where caching produced up to ~4,500 MiB/s read throughput, while no caching achieved around ~3,600 MiB/s.

-:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-no-cache-nconnect.png" alt-text="Diagram of 64-KiB tests with nconnect but no caching." lightbox="./media/performance-benchmarks-linux/64K-sequential-no-cache-nconnect.png":::
+:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read.png" alt-text="Diagram comparing 64-KiB sequential read throughput based on caching." lightbox="./media/performance-benchmarks-linux/64K-sequential-read.png":::

### Side-by-side comparison (with and without `nconnect`)
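As background for this comparison, the following sketch, which isn't from the article, shows how a Linux client might mount the same NFSv3 volume once without and once with the `nconnect` option. The server IP address, export path, and mount points are placeholders, and `nconnect` requires a reasonably recent Linux kernel (roughly 5.3 or later).

```bash
# Baseline mount: a single TCP connection to the volume
sudo mkdir -p /mnt/anf-single /mnt/anf-nconnect
sudo mount -t nfs -o rw,hard,vers=3,rsize=262144,wsize=262144 \
    10.0.0.4:/myvolume /mnt/anf-single

# nconnect mount: up to 8 TCP connections per mount point for more parallelism
sudo mount -t nfs -o rw,hard,vers=3,rsize=262144,wsize=262144,nconnect=8 \
    10.0.0.4:/myvolume /mnt/anf-nconnect
```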