articles/ai-studio/how-to/deploy-models-timegen-1.md (36 additions & 0 deletions)
@@ -47,6 +47,42 @@ You can deploy TimeGEN-1 as a serverless API with pay-as-you-go billing. Nixtla
  - An [Azure AI Studio project](../how-to/create-projects.md).
  - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, visit [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).

+ ### Pricing information
+
+ #### Estimate the number of tokens needed
+
+ Before you create a deployment, it's useful to estimate the number of tokens that you plan to use and be billed for.
+ One token corresponds to one data point in your input dataset or output dataset.
+
+ Suppose you have the following input time series dataset:
+ To determine the number of tokens, multiply the number of rows (in this example, two) by the number of columns used for forecasting, not counting the unique_id and timestamp columns (in this example, three), to get a total of six tokens.
+ You can also determine the number of tokens by counting the number of data points returned after data forecasting. In this example, the number of tokens is two.
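The token math above is simple enough to script as a quick check. The following is a minimal sketch that just replays the hypothetical figures from the example (two rows, three forecasting columns, two returned forecast points); it isn't taken from the article itself:

```bash
# Sketch of the token arithmetic described above (hypothetical example values).
rows=2                # rows in the input time series dataset
forecast_columns=3    # columns used for forecasting, excluding unique_id and timestamp
input_tokens=$(( rows * forecast_columns ))   # 2 x 3 = 6 input tokens
output_tokens=2       # one token per data point returned by the forecast
echo "input tokens: $input_tokens, output tokens: $output_tokens"
```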
+
+ #### Estimate the pricing
+
+ There are four pricing meters, as described in the following table:
articles/aks/create-node-pools.md (2 additions & 2 deletions)
@@ -31,7 +31,7 @@ This article shows you how to create one or more node pools in an AKS cluster.
  The following limitations apply when you create AKS clusters that support multiple node pools:

  * See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](quotas-skus-regions.md).
- * You can delete system node pools if you have another system node pool to take its place in the AKS cluster. Otherwise, you cannot delete the system node pool.
+ * You can delete the system node pool if you have another system node pool to take its place in the AKS cluster. Otherwise, you cannot delete the system node pool.
  * System pools must contain at least one node, and user node pools may contain zero or more nodes.
  * The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers.
  * The AKS cluster must use Virtual Machine Scale Sets for the nodes.
@@ -60,7 +60,7 @@ The following limitations apply when you create AKS clusters that support multip
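For context on the node pool operations that these limitations govern, here's a minimal Azure CLI sketch; the resource group, cluster, and pool names are hypothetical placeholders:

```bash
# Add a user node pool to an existing AKS cluster (names are placeholders).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool1 \
  --node-count 3 \
  --mode User

# Delete a node pool. Per the limitation above, a system node pool can only be
# deleted if another system node pool remains in the cluster.
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool1
```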
articles/app-service/manage-custom-dns-buy-domain.md (1 addition & 1 deletion)
@@ -71,7 +71,7 @@ For pricing information on App Service domains, visit the [App Service Pricing p
  | Setting | Description |
  | -------- | ----------- |
  |**Auto renewal**| Your App Service domain is registered to you at one-year increments. Enable auto renewal so that your domain registration doesn't expire and that you retain ownership of the domain. Your Azure subscription is automatically charged the yearly domain registration fee at the time of renewal. If you leave it disabled, you must [renew it manually](#renew-the-domain). |
- |**Privacy protection**| Enabled by default. Privacy protection hides your domain registration contact information from the WHOIS database. Privacy protection is already included in the yearly domain registration fee. To opt out, select **Disable**. |
+ |**Privacy protection**| Enabled by default. Privacy protection hides your domain registration contact information from the WHOIS database and is already included in the yearly domain registration fee. To opt out, select **Disable**. Privacy protection is not supported in the following top-level domains (TLDs): co.uk, in, org.uk, co.in, and nl. |

  1. Select **Next: Tags** and set the tags you want for your App Service domain. Tagging isn't required for using App Service domains, but is a [feature in Azure that helps you manage your resources](../azure-resource-manager/management/tag-resources.md).
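For readers who prefer the CLI to the portal wizard described in this table, a hedged sketch follows. It assumes your Azure CLI version provides the `az appservice domain create` command with these parameters; the domain name, resource group, and contact file are placeholders:

```bash
# Hypothetical sketch: purchase an App Service domain from the CLI.
# contact_info.json holds the registrant contact details required by the registrar.
az appservice domain create \
  --resource-group myResourceGroup \
  --hostname contoso.com \
  --contact-info @contact_info.json \
  --accept-terms
```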
articles/azure-cache-for-redis/cache-best-practices-performance.md (10 additions & 6 deletions)
@@ -5,7 +5,7 @@ description: Learn how to test the performance of Azure Cache for Redis.
  author: flang-msft
  ms.service: cache
  ms.topic: conceptual
- ms.date: 06/19/2023
+ ms.date: 07/01/2024
  ms.author: franlanglois
  ---
@@ -17,7 +17,7 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
  ## How to use the redis-benchmark utility

- 1. Install open source Redis server to a client VM you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.
+ 1. Install open source Redis server to a client virtual machine (VM) you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.

  1. The client VM used for testing should be _in the same region_ as your Azure Cache for Redis instance.
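The article's own example commands aren't fully visible in this diff, so as a rough illustration only: assuming redis-benchmark 6.0 or later (which supports `--tls`) and a hypothetical cache endpoint, a TLS-enabled GET throughput test might look like the following sketch.

```bash
# Hypothetical sketch, not the article's example: GET-only throughput test with
# 1-kB values, 50 clients, pipelining, and TLS on port 6380.
redis-benchmark -h yourcache.redis.cache.windows.net -p 6380 -a <your-access-key> \
  --tls -t GET -d 1024 -n 1000000 -c 50 -P 10
```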
@@ -49,6 +49,10 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
  - On the Premium tier, scaling out, clustering, is typically recommended before scaling up. Clustering allows Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.

+ - On _C0_ and _C1_ Standard caches, while internal Defender scanning is running on the VMs, you might see short spikes in server load that aren't caused by an increase in cache requests. You see higher latency for requests while internal Defender scans run on these tiers a couple of times a day. Caches on the _C0_ and _C1_ tiers have only a single core to multitask, dividing the work of serving internal Defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
+
+   The increased cache size on the higher tiers helps address any latency concerns. Also, at the _C2_ level, you have support for as many as 2,000 client connections.
+
  ## Redis-benchmark examples

  **Pre-test setup**:
@@ -88,7 +92,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
  ## Example performance benchmark data

- The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions may be lower.
+ The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions might be lower.

  The following configuration was used to benchmark throughput for the Basic, Standard, and Premium tiers:
@@ -144,7 +148,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
  #### Enterprise Cluster Policy

- | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+ | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
  |:---:| --- | ---:|---:| ---:| ---:|
  | E10 | 12 GB | 4 | 4,000 | 300,000 | 207,000 |
  | E20 | 25 GB | 4 | 4,000 | 680,000 | 480,000 |
@@ -156,7 +160,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
  #### OSS Cluster Policy

- | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+ | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
@@ -170,7 +174,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
  In addition to scaling up by moving to larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-out---enterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has capacity of two--meaning a primary and replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found at the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.

- The following tables show the GET requests per second at different capacities, using SSL and a 1-kB value size.
+ The following tables show the `GET` requests per second at different capacities, using SSL and a 1-kB value size.