
Commit 4ebcf0c

Merge pull request #267694 from MicrosoftDocs/main
2/29/2024 AM Publish
2 parents 4aac40e + 6b72007 commit 4ebcf0c


50 files changed (+375 −274 lines)


articles/ai-services/speech-service/role-based-access-control.md

Lines changed: 8 additions & 5 deletions
@@ -15,17 +15,20 @@ ms.author: eur
You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a custom speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).

> [!NOTE]
- > A Speech resource can inherit or be assigned multiple roles. The final level of access to this resource is a combination of all roles permissions from the operation level.
+ > A Speech resource can inherit or be assigned multiple roles. The final level of access to the resource is a combination of all role permissions.

## Roles for Speech resources

- A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
+ A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in the following table are available for assignment.
+
+ > [!WARNING]
+ > The Speech service architecture differs from that of other Azure AI services in how it uses the [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). The Speech service relies on the data plane much more heavily than other Azure AI services do, so its roles require a different setup. As a result, some general Cognitive Services roles grant an access set that doesn't match their name in Speech service scenarios. For instance, *Cognitive Services User* in effect provides Contributor rights, while *Cognitive Services Contributor* provides no access at all. The same is true of the generic *Owner* and *Contributor* roles, which have no data plane rights and consequently provide no access to a Speech resource. For consistency, we recommend using the roles that contain *Speech* in their names: *Cognitive Services Speech User* and *Cognitive Services Speech Contributor*. Their access sets were designed specifically for the Speech service. If you want to use general Cognitive Services roles or generic Azure roles, study the following access table carefully.
| Role | Can list resource keys | Access to data, models, and endpoints in custom projects | Access to speech transcription and synthesis APIs |
| ---| ---| ---| ---|
- |**Owner** |Yes |View, create, edit, and delete |Yes |
- |**Contributor** |Yes |View, create, edit, and delete |Yes |
- |**Cognitive Services Contributor** |Yes |View, create, edit, and delete |Yes |
+ |**Owner** |Yes |None |No |
+ |**Contributor** |Yes |None |No |
+ |**Cognitive Services Contributor** |Yes |None |No |
|**Cognitive Services User** |Yes |View, create, edit, and delete |Yes |
|**Cognitive Services Speech Contributor** |No |View, create, edit, and delete |Yes |
|**Cognitive Services Speech User** |No |View only |Yes |
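To make the role assignments concrete, here's a sketch of granting one of the Speech-specific roles with the Azure CLI. The assignee and scope values below are hypothetical placeholders; the command is echoed for review rather than executed, so drop `echo` and sign in with `az login` to run it.

```shell
# Hypothetical placeholders -- substitute your own principal and Speech resource ID.
ASSIGNEE="user@contoso.com"
SCOPE="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<speech-resource-name>"

# Echoed for review; drop 'echo' to execute against a signed-in Azure CLI session.
echo az role assignment create \
  --role "Cognitive Services Speech User" \
  --assignee "$ASSIGNEE" \
  --scope "$SCOPE"
```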

articles/ai-studio/how-to/flow-deploy.md

Lines changed: 17 additions & 3 deletions
@@ -10,7 +10,7 @@ ms.topic: how-to
ms.date: 2/24/2024
ms.reviewer: eur
ms.author: eur
- author: eric-urban
+ author: likebupt
---

# Deploy a flow for real-time inference
@@ -254,7 +254,7 @@ You can also directly go to the **Deployments** page from the left navigation, s
## Test the endpoint

- In the endpoint detail page, switch to the **Test** tab.
+ In the deployment detail page, switch to the **Test** tab.

For endpoints deployed from standard flow, you can input values in the form editor or JSON editor to test the endpoint.
@@ -266,7 +266,21 @@ The `chat_input` was set during development of the chat flow. You can input the
## Consume the endpoint

- In the endpoint detail page, switch to the **Consume** tab. You can find the REST endpoint and key/token to consume your endpoint. There's also sample code for you to consume the endpoint in different languages.
+ In the deployment detail page, switch to the **Consume** tab. You can find the REST endpoint and key/token to consume your endpoint. There's also sample code for you to consume the endpoint in different languages.
+
+ :::image type="content" source="../media/prompt-flow/how-to-deploy-for-real-time-inference/consume-sample-code.png" alt-text="Screenshot of sample code of consuming endpoints." lightbox = "../media/prompt-flow/how-to-deploy-for-real-time-inference/consume-sample-code.png":::
+
+ You need to input values for `RequestBody` or `data` and `api_key`. For example, if your flow has two inputs, `location` and `url`, then you need to specify the data as follows.
+
+ ```json
+ {
+     "location": "LA",
+     "url": "<the_url_to_be_classified>"
+ }
+ ```
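As an illustrative sketch only, a request body like the one above can be posted to the deployment with curl. The endpoint URL shape, the `Bearer` authorization header, and the key value are assumptions modeled on typical Azure Machine Learning online endpoints; take the real values from the **Consume** tab. The command is echoed for review rather than executed.

```shell
# Hypothetical endpoint and key -- copy the real values from the Consume tab.
ENDPOINT_URL="https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY="<api-key>"
DATA='{"location": "LA", "url": "<the_url_to_be_classified>"}'

# Echoed for review; drop 'echo' to actually send the request.
echo curl -X POST "$ENDPOINT_URL" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d "$DATA"
```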
## Clean up resources

articles/azure-monitor/cost-usage.md

Lines changed: 2 additions & 1 deletion
@@ -27,6 +27,7 @@ Several other features don't have a direct cost, but you instead pay for the ing
| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated. |

+ A list of Azure Monitor billing meter names is available [here](cost-meters.md).

### Data transfer charges
Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. Data transfer charges for Azure Monitor, however, are typically very small compared to the costs for data ingestion and retention. You should focus more on your ingested data volume to control your costs.
@@ -52,7 +53,7 @@ To get started analyzing your Azure Monitor charges, open [Cost Management + Bil
:::image type="content" source="media/usage-estimated-costs/010.png" lightbox="media/usage-estimated-costs/010.png" alt-text="Screenshot that shows Azure Cost Management with cost information.":::

- To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different charges that are included in each service.
+ To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different billing meters that are included in each service.

- Azure Monitor
- Log Analytics

articles/azure-monitor/logs/cost-logs.md

Lines changed: 4 additions & 0 deletions
@@ -22,6 +22,8 @@ The default pricing for Log Analytics is a pay-as-you-go model that's based on i
- The number and type of monitored resources.
- The types of data collected from each monitored resource.

+ A list of Azure Monitor billing meter names is available [here](../cost-meters.md).

## Data size calculation

Data volume is measured as the size of the data sent to be stored and is measured in units of GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md) or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
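As a rough illustration of the idea in the paragraph above, the sketch below approximates a record's billed size from the string length of its column names and values. This is a simplified model, not the exact Log Analytics calculation (which excludes certain standard columns and applies its own serialization), and the record itself is a made-up example.

```python
# Simplified sketch: approximate a record's billed size from a string
# representation of its columns. Not the exact Log Analytics algorithm.
record = {
    "TimeGenerated": "2024-02-29T00:00:00Z",
    "Computer": "web-01",
    "Message": "Service started",
}

# Sum the lengths of column names and values, in bytes (ASCII assumed here).
record_bytes = sum(len(key) + len(str(value)) for key, value in record.items())

# Billing uses GB defined as 10^9 bytes.
record_gb = record_bytes / 1e9
print(record_bytes, record_gb)
```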
@@ -164,6 +166,8 @@ Subscriptions that contained a Log Analytics workspace or Application Insights r
Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing).

+ A list of Azure Monitor billing meter names, including these legacy tiers, is available [here](../cost-meters.md).

> [!IMPORTANT]
> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs.

articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md

Lines changed: 1 addition & 1 deletion
@@ -108,6 +108,6 @@ During onboarding, to create a connection between your VMware vCenter and Azure,
As a last step, run the following command:

- `az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3csubscrption-id%3e/resourcegroups/%3cresource-group-name%3e/providers/Microsoft.AVS/privateClouds/%3cprivate-cloud-name%3e/addons/arc?api-version=2022-05-01%22)
+ `az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3Csubscrption-id%3E/resourcegroups/%3Cresource-group-name%3E/providers/Microsoft.AVS/privateClouds/%3Cprivate-cloud-name%3E/addons/arc?api-version=2022-05-01%22)
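A minimal sketch of the same command with the URL assembled from shell variables (the values below are placeholders). The command is echoed for review; drop `echo` and sign in with `az login` to execute it.

```shell
# Placeholder values -- substitute your own.
SUBSCRIPTION_ID="<subscription-id>"
RESOURCE_GROUP="<resource-group-name>"
PRIVATE_CLOUD="<private-cloud-name>"

URL="https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${RESOURCE_GROUP}/providers/Microsoft.AVS/privateClouds/${PRIVATE_CLOUD}/addons/arc?api-version=2022-05-01"

# Echoed for review; drop 'echo' to execute against a signed-in Azure CLI session.
echo az rest --method delete --url "$URL"
```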
Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.

articles/communication-services/whats-new.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ Azure Communication Services Email Simple Mail Transfer Protocol (SMTP) as a Ser
### Azure AI-powered Azure Communication Services Call Automation API Actions
:::image type="content" source="./media/whats-new-images/11-23/advanced-call-automation-actions.png" alt-text="A graphic showing a server interacting with the cloud":::

- Azure AI-powered Call Automation API actions are now generally available for developers who want to create enhanced calling workflows using Azure AI Speech-to-Text, Text-to-Speech and other language understanding engines. These actions allow developers to play dynamic audio prompts and recognize voice input from callers, enabling natural conversational experiences and more efficient task handling. Developers can use these actions with any of the four major SDKs - .NET, Java, JavaScript and Python - and integrate them with their Azure Open AI solutions to create virtual assistants that go beyond simple IVRs. You can learn more about this release and its capabilities from the Microsoft Ignite 2023 announcements blog and on-demand session.
+ Azure AI-powered Call Automation API actions are now generally available for developers who want to create enhanced calling workflows using Azure AI Speech-to-Text, Text-to-Speech and other language understanding engines. These actions allow developers to play dynamic audio prompts and recognize voice input from callers, enabling natural conversational experiences and more efficient task handling. Developers can use these actions with any of the four major SDKs - .NET, Java, JavaScript and Python - and integrate them with their Azure OpenAI solutions to create virtual assistants that go beyond simple IVRs. You can learn more about this release and its capabilities from the Microsoft Ignite 2023 announcements blog and on-demand session.

[Read more in the Ignite Blog post.](https://techcommunity.microsoft.com/t5/azure-communication-services/ignite-2023-creating-value-with-intelligent-application/ba-p/3907629)
articles/cosmos-db/cassandra/spark-databricks.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ This article details how to work with Azure Cosmos DB for Apache Cassandra from
* **Cassandra Spark connector:** - To integrate Azure Cosmos DB for Apache Cassandra with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the cluster:

- * Review the Databricks runtime version, the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See ["Upload a Maven package or Spark package"](https://docs.databricks.com/libraries) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector, your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ * Review the Databricks runtime version and the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See the ["Upload a Maven package or Spark package"](/azure/databricks/libraries/) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector to your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.

* **Azure Cosmos DB for Apache Cassandra-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB for Apache Cassandra. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0` [maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
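For orientation, the connection settings typically supplied to the cluster's Spark configuration when pairing the connector with Azure Cosmos DB for Apache Cassandra look roughly like the sketch below. The account name and key are hypothetical placeholders, and applying them via `spark.conf.set` (commented out here) is one common approach; treat this as a sketch, not a definitive configuration.

```python
# Sketch of typical Cassandra Spark connector settings for Azure Cosmos DB
# for Apache Cassandra (hypothetical account name and key).
cassandra_conf = {
    "spark.cassandra.connection.host": "<account-name>.cassandra.cosmos.azure.com",
    "spark.cassandra.connection.port": "10350",        # Cosmos DB API for Cassandra port
    "spark.cassandra.connection.ssl.enabled": "true",  # Cosmos DB requires TLS
    "spark.cassandra.auth.username": "<account-name>",
    "spark.cassandra.auth.password": "<account-key>",  # the account's primary key
}

# In a Databricks notebook you would apply these with, for example:
# for key, value in cassandra_conf.items():
#     spark.conf.set(key, value)
print(len(cassandra_conf))
```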

articles/cosmos-db/priority-based-execution.md

Lines changed: 1 addition & 5 deletions
@@ -49,11 +49,7 @@ To get started using priority-based execution, navigate to the **Features** page
#### [.NET SDK v3](#tab/net-v3)

```csharp
- using Microsoft.Azure.Cosmos.PartitionKey;
- using Microsoft.Azure.Cosmos.PriorityLevel;
-
- Using Mircosoft.Azure.Cosmos.PartitionKey;
- Using Mircosoft.Azure.Cosmos.PriorityLevel;
+ using Microsoft.Azure.Cosmos;

//update products catalog with low priority
RequestOptions catalogRequestOptions = new ItemRequestOptions{PriorityLevel = PriorityLevel.Low};
```

0 commit comments
