Commit c5a1350

Merge pull request #195051 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 17f6e57 + 619bec4 commit c5a1350

9 files changed: +15 additions, -15 deletions


articles/app-service/resources-kudu.md (1 addition, 1 deletion)

@@ -37,7 +37,7 @@ It also provides other features, such as:
 - Generates [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
 - Allows access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
 
-## RBAC permissions required to access Kudo
+## RBAC permissions required to access Kudu
 
 To access Kudu in the browser with Azure Active Directory authentication, you need to be a member of a built-in or custom role.
 
 - If using a built-in role, you must be a member of Website Contributor, Contributor, or Owner.
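The role membership this hunk describes can be granted with the Azure CLI. A hedged sketch, not part of the commit: the assignee, subscription, resource group, and app names below are placeholders.

```shell
# Grant the built-in Website Contributor role, scoped to a single web app,
# so the user can reach Kudu in the browser. All <...> values are placeholders.
az role assignment create \
  --assignee "<user-upn>" \
  --role "Website Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app>"
```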

articles/azure-arc/data/toc.yml (1 addition, 1 deletion)

@@ -169,7 +169,7 @@ items:
     href: using-extensions-in-postgresql-hyperscale-server-group.md
 - name: Migrate
   items:
-  - name: Migrate PostgreSQL data into a Postgres Hyperscale server group
+  - name: Migrate PostgreSQL data into a PostgreSQL Hyperscale server group
     href: migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
 - name: Import the sample database AdventureWorks
   href: restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md

articles/azure-resource-manager/bicep/deploy-github-actions.md (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ ms.custom: github-actions-azure
 [GitHub Actions](https://docs.github.com/en/actions) is a suite of features in GitHub to automate your software development workflows.
 
-In this quickstart, you use the [GitHub Action for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
+In this quickstart, you use the [GitHub Actions for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
 
 It provides a short introduction to GitHub actions and Bicep files. If you want more detailed steps on setting up the GitHub actions and project, see [Learning path: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions).

@@ -32,7 +32,7 @@ az group create -n exampleRG -l westus
 ## Generate deployment credentials
 
-Your GitHub action runs under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
+Your GitHub Actions runs under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
 
 Replace the placeholder `myApp` with the name of your application. Replace `{subscription-id}` with your subscription ID.
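The credentials step described in the second hunk typically looks like the following call. A sketch under stated assumptions: `myApp` and `{subscription-id}` are the placeholders named in the text, `exampleRG` comes from the earlier hunk header, and the role, scope, and output flags are assumptions rather than part of the commit.

```shell
# Hedged sketch of "Generate deployment credentials": create a service
# principal scoped to the resource group the Bicep file deploys into.
az ad sp create-for-rbac \
  --name myApp \
  --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/exampleRG \
  --sdk-auth
```

The JSON that this command prints is what gets stored as a GitHub secret for the workflow to authenticate with.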

articles/cognitive-services/language-service/question-answering/concepts/azure-resources.md (3 additions, 3 deletions)

@@ -30,7 +30,7 @@ When you move into the development phase of the project, you should consider:
 Typically there are three parameters you need to consider:
 
 * **The throughput you need**:
-    * Question answering is a free feature, and the throughput is currently capped at 10 transactions per second for both management APIs and prediction APIs.
+    * The throughput for question answering is currently capped at 10 transactions per second for both management APIs and prediction APIs.
     * This should also influence your Azure **Cognitive Search** SKU selection, see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
 
 * **Size and the number of knowledge bases**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of knowledge bases you need based on number of different subject domains. One subject domain (for a single language) should be in one knowledge base.

@@ -42,7 +42,7 @@ Typically there are three parameters you need to consider:
 For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases of the same language (one index per published knowledge base). The 15th index is used for all the knowledge bases for authoring and testing. If you choose to have knowledge bases in different languages, then you can only publish seven knowledge bases.
 
-* **Number of documents as sources**: question answering is a free feature, and there are no limits to the number of documents you can add as sources.
+* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in question answering.
 
 The following table gives you some high-level guidelines.

@@ -54,7 +54,7 @@ The following table gives you some high-level guidelines.
 ## Recommended settings
 
-Custom question answering is a free feature, and the throughput is currently capped at 10 transactions per second for both management APIs and prediction APIs. To target 10 transactions per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
+The throughput for question answering is currently capped at 10 transactions per second for both management APIs and prediction APIs. To target 10 transactions per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
 
 ## Keys in question answering
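The index arithmetic in the second hunk (15 indexes allow 14 same-language or 7 multi-language published knowledge bases) can be sketched as follows. The helper name is ours, and the two-indexes-per-multilingual-knowledge-base factor is inferred from that 15-to-7 example, not stated by the commit.

```python
def max_published_kbs(allowed_indexes: int, multiple_languages: bool = False) -> int:
    """Illustrative helper, not an Azure API: how many knowledge bases can be
    published given an Azure Cognitive Search index quota."""
    usable = allowed_indexes - 1  # one index is reserved for authoring/testing
    # Per the example above, knowledge bases in different languages halve the
    # publishable count, i.e. each appears to consume two indexes.
    return usable // 2 if multiple_languages else usable

print(max_published_kbs(15))        # 14 same-language knowledge bases
print(max_published_kbs(15, True))  # 7 knowledge bases across languages
```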

articles/cosmos-db/sql/how-to-time-to-live.md (2 additions, 2 deletions)

@@ -199,7 +199,7 @@ async function createcontainerWithTTL(db: Database, containerDefinition: Contain
 In addition to setting a default time to live on a container, you can set a time to live for an item. Setting time to live at the item level will override the default TTL of the item in that container.
 
-* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`.
+* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can provide a `-1` as well when the item should not expire.
 
 * If the item doesn't have a TTL field, then by default, the TTL set to the container will apply to the item.

@@ -563,4 +563,4 @@ container = database.createContainerIfNotExists(containerProperties, 400).block(
 Learn more about time to live in the following article:
 
-* [Time to live](time-to-live.md)
+* [Time to live](time-to-live.md)

(The second hunk's removed and added lines are textually identical; the change is the trailing newline at end of file.)
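The item-level TTL rules above amount to a `ttl` property on the item body itself. A minimal sketch of the payload shapes; the item fields besides `id` and `ttl` are invented for illustration:

```python
# Item-level TTL in Azure Cosmos DB is a "ttl" property on the item:
# a positive number of seconds counted from the item's last-modified
# timestamp (_ts), or -1 for "never expire".
order_expiring = {"id": "order-1", "status": "open", "ttl": 300}  # expires ~300 s after last write
order_kept = {"id": "order-2", "status": "open", "ttl": -1}       # overrides container default; never expires
order_default = {"id": "order-3", "status": "open"}               # no ttl field: container default applies
```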

articles/iot-hub/iot-hub-mqtt-support.md (1 addition, 1 deletion)

@@ -70,7 +70,7 @@ In order to ensure a client/IoT Hub connection stays alive, both the service and
 |Language |Default keep-alive interval |Configurable |
 |---------|---------|---------|
 |Node.js | 180 seconds | No |
-|Java | 230 seconds | No |
+|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) |
 |C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
 |C# | 300 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/src/Transport/Mqtt/MqttTransportSettings.cs#L89) |
 |Python | 60 seconds | No |
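The keep-alive contract in this table can be illustrated with a small client-side helper. This is our own sketch, not SDK code, and the ping-at-half-the-interval rule is a common MQTT client convention rather than anything this commit specifies:

```python
import time
from typing import Optional

def ping_due(last_activity_s: float, keep_alive_s: float,
             now_s: Optional[float] = None) -> bool:
    """Return True when an MQTT client should send a PINGREQ.

    Illustrative only: the broker may drop a connection that stays silent
    for a full keep-alive interval, so clients conventionally ping once
    half the interval has elapsed since the last packet.
    """
    if now_s is None:
        now_s = time.monotonic()
    return (now_s - last_activity_s) >= keep_alive_s / 2

# With Java's 230-second default, a ping becomes due 115 s after the last packet.
print(ping_due(0.0, 230.0, now_s=100.0))  # False
print(ping_due(0.0, 230.0, now_s=120.0))  # True
```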

articles/sentinel/authentication-normalization-schema.md (3 additions, 3 deletions)

@@ -62,12 +62,12 @@ The following filtering parameters are available:
 | Name | Type | Description |
 |----------|-----------|-------------|
-| **starttime** | datetime | Filter only DNS queries that ran at or after this time. |
-| **endtime** | datetime | Filter only DNS queries that finished running at or before this time. |
+| **starttime** | datetime | Filter only authentication events that ran at or after this time. |
+| **endtime** | datetime | Filter only authentication events that finished running at or before this time. |
 | **targetusername_has** | string | Filter only authentication events that has any of the listed user names. |
 
-For example, to filter only DNS queries from the last day to a specific user, use:
+For example, to filter only authentication events from the last day to a specific user, use:
 
 ```kql
 imAuthentication (targetusername_has = 'johndoe', starttime = ago(1d), endtime=now())
 ```

articles/spring-cloud/how-to-config-server.md (1 addition, 1 deletion)

@@ -174,7 +174,7 @@ Now that your configuration files are saved in a repository, you need to connect
 > [!CAUTION]
 > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
-> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for Github. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
 
 * **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.

articles/synapse-analytics/whats-new-archive.md (1 addition, 1 deletion)

@@ -41,7 +41,7 @@ Improvements to the Synapse Machine Learning library v0.9.5 (previously called M
 ### Synapse SQL
 
-* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how Github leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
+* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
 
 * Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function which hashes values. Learn how to use [hash values in distributing data using this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
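The HASHBYTES bullet describes hashing values to distribute data. A rough Python analogue of `HASHBYTES('SHA2_256', ...)` using only the standard library; the helper names and the bucketing scheme are our illustration, not Synapse code:

```python
import hashlib

def sha2_256_hex(value: str) -> str:
    # Rough analogue of T-SQL HASHBYTES('SHA2_256', value): a stable digest
    # of the input string.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def bucket(value: str, n_buckets: int) -> int:
    # Map the digest onto one of n_buckets buckets, the way a stable hash
    # can be used to distribute rows.
    return int(sha2_256_hex(value), 16) % n_buckets

print(len(sha2_256_hex("johndoe")))  # 64 hex characters (256 bits)
```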

0 commit comments