
Commit f539689

Merge pull request #106335 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents: bda985d + f1b584c

File tree

9 files changed (+25, -8 lines)


articles/active-directory-b2c/custom-policy-configure-user-input.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -259,7 +259,7 @@ The following elements are used to define the claim:
 1. Sign in to the [Azure portal](https://portal.azure.com).
 2. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
 3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**.
-4. Select **Identity Experience Framework (Preview)**.
+4. Select **Identity Experience Framework**.
 5. Select **Upload Custom Policy**, and then upload the two policy files that you changed.
 2. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button.
 3. You should be able to sign up using an email address.
```

articles/azure-monitor/learn/tutorial-resource-logs.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,7 +41,7 @@ A Log Analytics workspace in Azure Monitor collects and indexes log data from a
 - **Subscription**: Select the subscription to store the workspace. This does not need to be the same subscription as the resource being monitored.
 - **Resource Group**: Select an existing resource group or click **Create new** to create a new one. This does not need to be the same resource group as the resource being monitored.
 - **Location**: Select an Azure region. This does not need to be the same location as the resource being monitored.
-- **Pricing tier**: Select *Free*, which will retain 7 days of data. You can change this pricing tier later. Click the **Log Analytics pricing** link to learn more about different pricing tiers.
+- **Pricing tier**: Select *Pay-as-you-go* as the pricing tier. You can change this pricing tier later. Click the **Log Analytics pricing** link to learn more about different pricing tiers.

 ![New workspace](media/tutorial-resource-logs/new-workspace.png)
```
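If you prefer to script this step rather than use the portal, here is a minimal sketch with the track-2 azure-mgmt-loganalytics Python package; all names are placeholders, and in the management API the *Pay-as-you-go* tier corresponds to the `PerGB2018` SKU:

```python
# Minimal sketch, assuming azure-mgmt-loganalytics (7.x+) and azure-identity
# are installed; subscription, resource group, and workspace names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# "Pay-as-you-go" in the portal maps to the PerGB2018 SKU.
poller = client.workspaces.begin_create_or_update(
    "<resource-group>",
    "<workspace-name>",
    {
        "location": "eastus",
        "sku": {"name": "PerGB2018"},
    },
)
print(poller.result().provisioning_state)
```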

articles/azure-monitor/platform/manage-access.md

Lines changed: 1 addition & 0 deletions
````diff
@@ -84,6 +84,7 @@ if ($_.Properties.features.enableLogAccessUsingOnlyResourcePermissions -eq $null
 else
 { $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions = $true }
 Set-AzResource -ResourceId $_.ResourceId -Properties $_.Properties -Force
+}
 ```

 ### Using a Resource Manager template
````

articles/cosmos-db/sql-api-nodejs-get-started.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -433,7 +433,7 @@ Azure Cosmos DB supports rich queries against JSON documents stored in each cont
     ]
   };

-  const { resources } = await client.database(databaseId).container(containerId).items.query(querySpec, {enableCrossPartitionQuery:true}).fetchAll();
+  const { resources } = await client.database(databaseId).container(containerId).items.query(querySpec).fetchAll();
   for (var queryResult of resources) {
     let resultString = JSON.stringify(queryResult);
     console.log(`\tQuery returned ${resultString}\n`);
```

articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -214,5 +214,5 @@ To remove the resource group using the Azure portal:

 In this document, you learned how to use the Apache Kafka Producer and Consumer API with Kafka on HDInsight. Use the following to learn more about working with Kafka:

-> [!div class="nextstepaction"]
-> [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
+* [Use Kafka REST Proxy](rest-proxy.md)
+* [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
```

articles/iot-edge/iot-edge-certs.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -64,7 +64,7 @@ Because manufacturing and operation processes are separated, consider the follow

 * With any certificate-based process, the root CA certificate and all intermediate CA certificates should be secured and monitored during the entire process of rolling out an IoT Edge device. The IoT Edge device manufacturer should have strong processes in place for proper storage and usage of their intermediate certificates. In addition, the device CA certificate should be kept in as secure storage as possible on the device itself, preferably a hardware security module.

-* The IoT Edge hub server certificate is presented by IoT Edge hub to the connecting client devices and modules. The common name (CN) of the device CA certificate **must not be** the same as the "hostname" that will be used in config.yaml on the IoT Edge device. The name used by clients to connect to IoT Edge (for example, via the GatewayHostName parameter of the connection string or the CONNECT command in MQTT) **can't be** the same as common name used in the device CA certificate. This restriction is because the IoT Edge hub presents its entire certificate chain for verification by clients. If the IoT Edge hub server certificate and the device CA certificate both have the same CN, you get in a verification loop and the certificate invalidates.
+* The IoT Edge hub server certificate is presented by IoT Edge hub to the connecting client devices and modules. The common name (CN) of the device CA certificate **must not be** the same as the "hostname" that will be used in config.yaml on the IoT Edge device. The name used by clients to connect to IoT Edge (for example, via the GatewayHostName parameter of the connection string or the CONNECT command in MQTT) **can't be** the same as the common name used in the device CA certificate. This restriction is because the IoT Edge hub presents its entire certificate chain for verification by clients. If the IoT Edge hub server certificate and the device CA certificate both have the same CN, you get in a verification loop and the certificate invalidates.

 * Because the device CA certificate is used by the IoT Edge security daemon to generate the final IoT Edge certificates, it must itself be a signing certificate, meaning it has certificate signing capabilities. Applying "V3 Basic constraints CA:True" to the device CA certificate automatically sets up the required key usage properties.
```
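As an illustration of the CN and signing-capability constraints above (not the documented IoT Edge tooling), here is a minimal sketch using Python's cryptography package to self-sign a device CA certificate whose CN deliberately differs from the device hostname; all names are placeholders:

```python
# Minimal sketch, assuming a recent version of the "cryptography" package.
# All names are placeholders; this is not the official IoT Edge cert workflow.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CN must differ from the "hostname" value in config.yaml.
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "my-device-ca")])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)  # self-signed here purely for illustration
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
    # "V3 Basic constraints CA:True" marks this as a signing certificate.
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)
```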

articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md

Lines changed: 16 additions & 0 deletions
```diff
@@ -184,6 +184,8 @@ To deploy the decision tree code from the preceding section, sign in to Azure Ma
 ![The Azure Machine Learning Studio (classic) primary authorization token](./media/linux-dsvm-walkthrough/workspace-token.png)
 1. Load the **AzureML** package, and then set values of the variables with your token and workspace ID in your R session on the DSVM:

+       if(!require("devtools")) install.packages("devtools")
+       devtools::install_github("RevolutionAnalytics/AzureML")
        if(!require("AzureML")) install.packages("AzureML")
        require(AzureML)
        wsAuth = "<authorization-token>"
@@ -203,9 +205,23 @@ To deploy the decision tree code from the preceding section, sign in to Azure Ma
        return(colnames(predictDF)[apply(predictDF, 1, which.max)])
    }

+1. Create a settings.json file for this workspace:
+
+       vim ~/.azureml/settings.json
+
+1. Make sure the following contents are put inside settings.json:
+
+       {"workspace":{
+       "id": "<workspace-id>",
+       "authorization_token": "<authorization-token>",
+       "api_endpoint": "https://studioapi.azureml.net",
+       "management_endpoint": "https://management.azureml.net"
+       }}
+

 1. Publish the **predictSpam** function to AzureML by using the **publishWebService** function:

+       ws <- workspace()
        spamWebService <- publishWebService(ws, fun = predictSpam, name="spamWebService", inputSchema = smallTrainSet, data.frame=TRUE)

 1. This function takes the **predictSpam** function, creates a web service named **spamWebService** that has defined inputs and outputs, and then returns information about the new endpoint.
```

articles/machine-learning/how-to-track-experiments.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -36,7 +36,7 @@ The following metrics can be added to a run while training an experiment. To vie
 |Lists|Function:<br>`run.log_list(name, value, description='')`<br><br>Example:<br>run.log_list("accuracies", [0.6, 0.7, 0.87]) | Log a list of values to the run with the given name.|
 |Row|Function:<br>`run.log_row(name, description=None, **kwargs)`<br>Example:<br>run.log_row("Y over X", x=1, y=0.4) | Using *log_row* creates a metric with multiple columns as described in kwargs. Each named parameter generates a column with the value specified. *log_row* can be called once to log an arbitrary tuple, or multiple times in a loop to generate a complete table.|
 |Table|Function:<br>`run.log_table(name, value, description='')`<br><br>Example:<br>run.log_table("Y over X", {"x":[1, 2, 3], "y":[0.6, 0.7, 0.89]}) | Log a dictionary object to the run with the given name. |
-|Images|Function:<br>`run.log_image(name, path=None, plot=None)`<br><br>Example:<br>`run.log_image("ROC", plt)` | Log an image to the run record. Use log_image to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record.|
+|Images|Function:<br>`run.log_image(name, path=None, plot=None)`<br><br>Example:<br>`run.log_image("ROC", plot=plt)` | Log an image to the run record. Use log_image to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record.|
 |Tag a run|Function:<br>`run.tag(key, value=None)`<br><br>Example:<br>run.tag("selected", "yes") | Tag the run with a string key and optional string value.|
 |Upload file or directory|Function:<br>`run.upload_file(name, path_or_stream)`<br> <br> Example:<br>run.upload_file("best_model.pkl", "./model.pkl") | Upload a file to the run record. Runs automatically capture files in the specified output directory, which defaults to "./outputs" for most run types. Use upload_file only when additional files need to be uploaded or an output directory is not specified. We suggest adding `outputs` to the name so that it gets uploaded to the outputs directory. You can list all of the files that are associated with this run record by calling `run.get_file_names()`|
```
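For context, a minimal sketch of the corrected `log_image` call alongside a couple of the other logging functions from the table (assuming azureml-core is installed and the script runs as part of a submitted run; metric names and values are placeholders):

```python
# Minimal sketch, assuming azureml-core is installed and this script is
# executed as part of a submitted run; names and values are placeholders.
import matplotlib.pyplot as plt
from azureml.core import Run

run = Run.get_context()

run.log("accuracy", 0.91)                      # single numeric value
run.log_list("accuracies", [0.6, 0.7, 0.87])   # list of values

plt.figure()
plt.plot([0, 0.5, 1], [0, 0.8, 1])
run.log_image("ROC", plot=plt)                 # pass the plot via the keyword
```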

articles/storage/queues/storage-python-how-to-use-queue-storage.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -38,7 +38,7 @@ The [Azure Storage SDK for Python](https://github.com/azure/azure-storage-python
 To install via the Python Package Index (PyPI), type:

 ```bash
-pip install azure-storage-blob==2.1.0
+pip install azure-storage-queue==2.1.0
 ```

 > [!NOTE]
````
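As a quick check that the corrected package name installs the right client, a minimal sketch against the 2.x `QueueService` API (account credentials and the queue name are placeholders):

```python
# Minimal sketch, assuming azure-storage-queue==2.1.0 is installed.
# The account name/key and queue name are placeholders.
from azure.storage.queue import QueueService

queue_service = QueueService(account_name="<account-name>", account_key="<account-key>")

queue_service.create_queue("taskqueue")
queue_service.put_message("taskqueue", u"Hello, World")

# Retrieve and delete the message to confirm the round trip.
for msg in queue_service.get_messages("taskqueue"):
    print(msg.content)
    queue_service.delete_message("taskqueue", msg.id, msg.pop_receipt)
```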
