Commit a1abdcc

Merge pull request #89625 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)

2 parents b6c4d59 + 52739aa

7 files changed: +33 −33 lines

articles/active-directory/saas-apps/idc-tutorial.md

Lines changed: 2 additions & 2 deletions
@@ -91,7 +91,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
 1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:

    In the **Sign-on URL** text box, type a URL:
-   `https://cas.idc.com/saml-welcome/AzureAppDirectory`
+   `https://www.idc.com/saml-welcome/<SamlWelcomeCode>`

 > [!NOTE]
 > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [IDC Client support team](mailto:[email protected]) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
@@ -140,7 +140,7 @@ To configure single sign-on on **IDC** side, you need to send the downloaded **F

 ### Create IDC test user

-In this section, you create a user called Britta Simon in IDC. Work with [IDC support team](mailto:[email protected]) to add the users in the IDC platform. Users must be created and activated before you use single sign-on.
+A user does not have to be created in IDC in advance. The user is created automatically the first time they use single sign-on.

 ## Test SSO

articles/api-management/api-management-access-restriction-policies.md

Lines changed: 4 additions & 4 deletions
@@ -118,7 +118,7 @@ The `rate-limit` policy prevents API usage spikes on a per subscription basis by

 | Name | Description | Required |
 | --------- | ----------- | -------- |
-| set-limit | Root element. | Yes |
+| rate-limit | Root element. | Yes |
 | api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
 | operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |

@@ -181,9 +181,9 @@ In the following example, the rate limit is keyed by the caller IP address.

 ### Elements

-| Name | Description | Required |
-| --------- | ------------- | -------- |
-| set-limit | Root element. | Yes |
+| Name | Description | Required |
+| ----------------- | ------------- | -------- |
+| rate-limit-by-key | Root element. | Yes |
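For illustration, the windowed counting that a keyed rate limit performs can be sketched in a few lines of Python. This is a toy model with assumed semantics, not the API Management gateway's implementation: it allows at most `calls` requests per key in each `renewal_period` window and rejects the rest (which the gateway would answer with HTTP 429):

```python
import time

class RateLimiterByKey:
    """Toy per-key rate limiter: at most `calls` per `renewal_period` seconds."""

    def __init__(self, calls, renewal_period):
        self.calls = calls
        self.renewal_period = renewal_period
        self.windows = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.windows.get(key, (now, 0))
        if now - start >= self.renewal_period:
            start, count = now, 0  # window expired: reset the counter
        if count >= self.calls:
            self.windows[key] = (start, count)
            return False  # over the limit: reject the call
        self.windows[key] = (start, count + 1)
        return True

limiter = RateLimiterByKey(calls=3, renewal_period=60)
results = [limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3, 61)]
print(results)  # [True, True, True, False, True]
```

The fourth call in the same 60-second window is rejected; the call at t=61 starts a fresh window and is allowed again.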
### Attributes

articles/batch/tutorial-rendering-cli.md

Lines changed: 4 additions & 4 deletions
@@ -164,20 +164,20 @@ az storage container create \
     --name job-myrenderjob
 ```

-To write output files to the container, Batch needs to use a Shared Access Signature (SAS) token. Create the token with the [az storage account generate-sas](/cli/azure/storage/account#az-storage-account-generate-sas) command. This example creates a token to write to any blob container in the account, and the token expires on November 15, 2018:
+To write output files to the container, Batch needs to use a Shared Access Signature (SAS) token. Create the token with the [az storage account generate-sas](/cli/azure/storage/account#az-storage-account-generate-sas) command. This example creates a token to write to any blob container in the account, and the token expires on November 15, 2020:

 ```azurecli-interactive
 az storage account generate-sas \
     --permissions w \
     --resource-types co \
     --services b \
-    --expiry 2019-11-15
+    --expiry 2020-11-15
 ```

 Take note of the token returned by the command, which looks similar to the following. You use this token in a later step.

 ```
-se=2018-11-15&sp=rw&sv=2017-04-17&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+se=2020-11-15&sp=rw&sv=2019-09-24&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 ```
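The SAS token returned by the command is a plain URL query string, so a quick way to sanity-check what a token grants before handing it to Batch is to parse its fields. This is a small illustrative sketch (not part of the tutorial), using a token of the shape shown above with a placeholder signature:

```python
from urllib.parse import parse_qs

# Example token of the shape returned by `az storage account generate-sas`
sas_token = "se=2020-11-15&sp=rw&sv=2019-09-24&ss=b&srt=co&sig=xxxx"

fields = {k: v[0] for k, v in parse_qs(sas_token).items()}
print(fields["se"])  # expiry date: 2020-11-15
print(fields["sp"])  # granted permissions: rw
print(fields["ss"])  # services: b (blob)
```

If `se` is in the past or `sp` lacks write permission (`w`), Batch will fail to upload output files to the container.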
## Render a single-frame scene
@@ -213,7 +213,7 @@ Modify the `blobSource` and `containerURL` elements in the JSON file so that the
     "commandLine": "cmd /c \"%3DSMAX_2018%3dsmaxcmdio.exe -secure off -v:5 -rfw:0 -start:1 -end:1 -outputName:\"dragon.jpg\" -w 400 -h 300 MotionBlur-DragonFlying.max\"",
     "resourceFiles": [
         {
-            "blobSource": "https://mystorageaccount.blob.core.windows.net/scenefiles/MotionBlur-DragonFlying.max",
+            "httpUrl": "https://mystorageaccount.blob.core.windows.net/scenefiles/MotionBlur-DragonFlying.max",
             "filePath": "MotionBlur-DragonFlying.max"
         }
     ],

articles/cognitive-services/QnAMaker/Tutorials/integrate-qnamaker-luis.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ In the above scenario, QnA Maker first gets the intent of the incoming question

 ## Web app Bot

-1. [Create a "Basic" Web App bot](https://docs.microsoft.com/azure/bot-service/bot-service-quickstart?view=azure-bot-service-4.0) which automatically includes a LUIS app. Select the 4.x SDK and the C# programming language.
+1. [Create a "Basic" Web App bot](https://docs.microsoft.com/azure/bot-service/bot-service-quickstart?view=azure-bot-service-4.0) which automatically includes a LUIS app. Select the C# programming language.

 1. Once the web app bot is created, in the Azure portal, select the web app bot.
 1. Select **Application Settings** in the Web app bot service navigation, then scroll down to the **Application settings** section of available settings.

articles/data-factory/copy-activity-schema-and-type-mapping.md

Lines changed: 4 additions & 4 deletions
@@ -18,7 +18,7 @@ ms.author: jingwang
 ---
 # Schema mapping in copy activity

-This article describes how Azure Data Factory copy activity does schema mapping and data type mapping from source data to sink data when execute the data copy.
+This article describes how the Azure Data Factory copy activity does schema mapping and data type mapping from source data to sink data when executing the data copy.

 ## Schema mapping

@@ -89,7 +89,7 @@ The following properties are supported under `translator` -> `mappings` -> objec
 | -------- | ------------------------------------------------------------ | -------- |
 | name | Name of the source or sink column. | Yes |
 | ordinal | Column index. Start with 1. <br>Apply and required when using delimited text without header line. | No |
-| path | JSON path expression for each field to extract or map. Apply for hierarchical data e.g. MongoDB/REST.<br>For fields under root object, JSON path starts with root $; for fields inside the array chosen by `collectionReference` property, JSON path starts from the array element. | No |
+| path | JSON path expression for each field to extract or map. Apply for hierarchical data e.g. MongoDB/REST.<br>For fields under the root object, the JSON path starts with root $; for fields inside the array chosen by the `collectionReference` property, the JSON path starts from the array element. | No |
 | type | Data Factory interim data type of the source or sink column. | No |
 | culture | Culture of the source or sink column. <br>Apply when type is `Datetime` or `Datetimeoffset`. The default is `en-us`. | No |
 | format | Format string to be used when type is `Datetime` or `Datetimeoffset`. Refer to [Custom Date and Time Format Strings](https://docs.microsoft.com/dotnet/standard/base-types/custom-date-and-time-format-strings) on how to format datetime. | No |
@@ -102,14 +102,14 @@ The following properties are supported under `translator` -> `mappings` in addit

 ### Alternative column mapping

-You can specify copy activity -> `translator` -> `columnMappings` to map between tabular-shaped data . In this case, "structure" section is required for both input and output datasets. Column mapping supports **mapping all or subset of columns in the source dataset "structure" to all columns in the sink dataset "structure"**. The following are error conditions that result in an exception:
+You can specify copy activity -> `translator` -> `columnMappings` to map between tabular-shaped data. In this case, the "structure" section is required for both input and output datasets. Column mapping supports **mapping all or a subset of columns in the source dataset "structure" to all columns in the sink dataset "structure"**. The following are error conditions that result in an exception:

 * Source data store query result does not have a column name that is specified in the input dataset "structure" section.
 * Sink data store (if with pre-defined schema) does not have a column name that is specified in the output dataset "structure" section.
 * Either fewer columns or more columns in the "structure" of sink dataset than specified in the mapping.
 * Duplicate mapping.

-In the following example, the input dataset has a structure and it points to a table in an on-premises Oracle database.
+In the following example, the input dataset has a structure, and it points to a table in an on-premises Oracle database.

 ```json
 {
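The error conditions listed in this hunk amount to a simple validation rule: every mapped source column must exist in the input "structure", every mapped sink column must exist in the output "structure", the mapping must cover the sink columns exactly, and no column may be mapped twice. A hedged Python sketch of that check (illustrative only, not Data Factory's implementation; the column names are made up):

```python
def validate_column_mappings(mappings, source_structure, sink_structure):
    """mappings: dict mapping a source column name to a sink column name."""
    errors = []
    if len(set(mappings.values())) != len(mappings):
        errors.append("Duplicate mapping.")
    for src in mappings:
        if src not in source_structure:
            errors.append("Source column '{}' not in input structure.".format(src))
    for snk in mappings.values():
        if snk not in sink_structure:
            errors.append("Sink column '{}' not in output structure.".format(snk))
    if set(mappings.values()) != set(sink_structure):
        errors.append("Mapping must cover all sink structure columns.")
    return errors

# A subset of source columns mapped onto the full sink structure: no errors.
print(validate_column_mappings(
    {"UserId": "MyUserId", "Name": "MyName"},
    source_structure=["UserId", "Name", "Group"],
    sink_structure=["MyUserId", "MyName"]))  # []
```

Dropping a sink column from the mapping, or mapping a column that the source query does not return, produces exactly the exception conditions the article enumerates.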

articles/data-factory/quickstart-create-data-factory-python.md

Lines changed: 17 additions & 17 deletions
@@ -153,7 +153,7 @@ You create linked services in a data factory to link your data stores and comput
 ls_name = 'storageLinkedService'

 # IMPORTANT: specify the name and key of your Azure Storage account.
-storage_string = SecureString('DefaultEndpointsProtocol=https;AccountName=<storageaccountname>;AccountKey=<storageaccountkey>')
+storage_string = SecureString(value='DefaultEndpointsProtocol=https;AccountName=<storageaccountname>;AccountKey=<storageaccountkey>')

 ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
 ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
@@ -222,10 +222,7 @@ Add the following code to the **Main** method that **triggers a pipeline run**.

 ```python
 #Create a pipeline run.
-run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name,
-    {
-    }
-)
+run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name, parameters={})
 ```

 ## Monitor a pipeline run
@@ -237,8 +234,12 @@ To monitor the pipeline run, add the following code the **Main** method:
 time.sleep(30)
 pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run_response.run_id)
 print("\n\tPipeline run status: {}".format(pipeline_run.status))
-activity_runs_paged = list(adf_client.activity_runs.list_by_pipeline_run(rg_name, df_name, pipeline_run.run_id, datetime.now() - timedelta(1), datetime.now() + timedelta(1)))
-print_activity_run_details(activity_runs_paged[0])
+filter_params = RunFilterParameters(
+    last_updated_after=datetime.now() - timedelta(1), last_updated_before=datetime.now() + timedelta(1))
+query_response = adf_client.activity_runs.query_by_pipeline_run(
+    rg_name, df_name, pipeline_run.run_id, filter_params)
+print_activity_run_details(query_response.value[0])
 ```

 Now, add the following statement to invoke the **main** method when the program is run:
@@ -334,7 +335,7 @@ def main():

     # Specify the name and key of your Azure Storage account
     storage_string = SecureString(
-        'DefaultEndpointsProtocol=https;AccountName=<storage account name>;AccountKey=<storage account key>')
+        value='DefaultEndpointsProtocol=https;AccountName=<storage account name>;AccountKey=<storage account key>')

     ls_azure_storage = AzureStorageLinkedService(
         connection_string=storage_string)
@@ -348,15 +349,15 @@ def main():
     blob_path = 'adfv2tutorial/input'
     blob_filename = 'input.txt'
     ds_azure_blob = AzureBlobDataset(
-        ds_ls, folder_path=blob_path, file_name=blob_filename)
+        linked_service_name=ds_ls, folder_path=blob_path, file_name=blob_filename)
     ds = adf_client.datasets.create_or_update(
         rg_name, df_name, ds_name, ds_azure_blob)
     print_item(ds)

     # Create an Azure blob dataset (output)
     dsOut_name = 'ds_out'
     output_blobpath = 'adfv2tutorial/output'
-    dsOut_azure_blob = AzureBlobDataset(ds_ls, folder_path=output_blobpath)
+    dsOut_azure_blob = AzureBlobDataset(linked_service_name=ds_ls, folder_path=output_blobpath)
     dsOut = adf_client.datasets.create_or_update(
         rg_name, df_name, dsOut_name, dsOut_azure_blob)
     print_item(dsOut)
@@ -379,19 +380,18 @@ def main():
     print_item(p)

     # Create a pipeline run
-    run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name,
-        {
-        }
-    )
+    run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name, parameters={})

     # Monitor the pipeline run
     time.sleep(30)
     pipeline_run = adf_client.pipeline_runs.get(
         rg_name, df_name, run_response.run_id)
     print("\n\tPipeline run status: {}".format(pipeline_run.status))
-    activity_runs_paged = list(adf_client.activity_runs.list_by_pipeline_run(
-        rg_name, df_name, pipeline_run.run_id, datetime.now() - timedelta(1), datetime.now() + timedelta(1)))
-    print_activity_run_details(activity_runs_paged[0])
+    filter_params = RunFilterParameters(
+        last_updated_after=datetime.now() - timedelta(1), last_updated_before=datetime.now() + timedelta(1))
+    query_response = adf_client.activity_runs.query_by_pipeline_run(
+        rg_name, df_name, pipeline_run.run_id, filter_params)
+    print_activity_run_details(query_response.value[0])


 # Start the main method
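The changes to this file repeatedly convert positional arguments to keyword arguments: `SecureString(value=...)`, `AzureBlobDataset(linked_service_name=...)`, `create_run(..., parameters={})`. This is the pattern newer versions of the azure-mgmt-datafactory models enforce, where model fields must be passed by name. A minimal illustration of the pattern with a stand-in class (an assumption for demonstration, not the SDK's actual code):

```python
class SecureString:
    # Stand-in for an SDK model whose fields are keyword-only arguments.
    def __init__(self, *, value):
        self.value = value

# The positional form raises TypeError; the keyword form works.
try:
    SecureString('DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>')
except TypeError as exc:
    print("positional call rejected:", exc)

s = SecureString(value='DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>')
print(s.value.startswith('DefaultEndpointsProtocol'))  # True
```

Passing fields by keyword keeps client code working when the SDK reorders or adds model parameters, which is why the diff touches every constructor call.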

articles/sql-database/sql-database-managed-instance-determine-size-vnet-subnet.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ When you create a Managed Instance, Azure allocates a number of virtual machines
 By design, a Managed Instance needs a minimum of 16 IP addresses in a subnet and may use up to 256 IP addresses. As a result, you can use subnet masks between /28 and /24 when defining your subnet IP ranges. A network mask bit of /28 (14 hosts per network) is a good size for a single general purpose or business-critical deployment. A mask bit of /27 (30 hosts per network) is ideal for multiple Managed Instance deployments within the same VNet. Mask bit settings of /26 (62 hosts) and /24 (254 hosts) allow further scaling out of the VNet to support additional Managed Instances.

 > [!IMPORTANT]
-> A subnet size with 16 IP addresses is the bare minimum with limited potential for the further Managed Instance scale out. Choosing subnet with the prefix /27 or below is highly recommended.
+> A subnet size with 16 IP addresses is the bare minimum, with limited potential: a scaling operation such as a vCore size change is not supported. Choosing a subnet with a prefix of /27 or lower (that is, a /27 or larger subnet) is highly recommended.
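The host counts quoted in this section follow directly from the prefix length: an IPv4 /n subnet has 2^(32−n) addresses, two of which (the network and broadcast addresses) are not usable for hosts. A quick Python check of the figures (illustrative arithmetic only):

```python
def usable_hosts(prefix):
    """Usable host addresses in an IPv4 subnet with the given prefix length."""
    return 2 ** (32 - prefix) - 2  # minus network and broadcast addresses

for prefix in (28, 27, 26, 24):
    print("/{}: {} hosts".format(prefix, usable_hosts(prefix)))
# /28: 14 hosts, /27: 30 hosts, /26: 62 hosts, /24: 254 hosts
```

This matches the article's figures of 14, 30, 62, and 254 hosts, and shows why a shorter prefix (lower number) means a larger subnet with more room to scale.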
## Determine subnet size
