Commit ea545d3

Merge pull request #185065 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents: e69336c + fe62785

File tree

7 files changed (+21 lines, -14 lines)

articles/machine-learning/how-to-troubleshoot-deployment.md

Lines changed: 7 additions & 1 deletion
@@ -113,6 +113,12 @@ After the image is successfully built, the system attempts to start a container
Use the info in the [Inspect the Docker log](how-to-troubleshoot-deployment-local.md#dockerlog) article.

+ ## Container azureml-fe-aci launch fails
+
+ When deploying a service to an Azure Container Instance compute target, Azure Machine Learning attempts to create a front-end container named `azureml-fe-aci` for the inference request. If `azureml-fe-aci` crashes, you can see its logs by running `az container logs --name MyContainerGroup --resource-group MyResourceGroup --subscription MySubscription --container-name azureml-fe-aci`. Follow the error message in the logs to fix the issue.
+
+ The most common failure for `azureml-fe-aci` is that the provided SSL certificate or key is invalid.

## Function fails: get_model_path()

Often, in the `init()` function in the scoring script, the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-none---workspace-none-) function is called to locate a model file or a folder of model files in the container. If the model file or folder cannot be found, the function fails. The easiest way to debug this error is to run the following Python code in the container shell:
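The Python snippet the article refers to falls outside this hunk. As a rough sketch of what such a check could look like, assuming the `azureml-core` SDK is installed in the container and using a placeholder model name, you might run:

```py
# Illustrative only: confirm where get_model_path() looks for the model files.
# 'my-model' is a placeholder; use the model name passed in your init() function.
import logging
from azureml.core.model import Model

logging.basicConfig(level=logging.DEBUG)  # surfaces the paths that are searched
print(Model.get_model_path(model_name='my-model'))
```

If the call fails, the debug output lists the locations that were searched inside the container, which usually points to a mismatch between the registered model and the path your script expects.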
@@ -227,4 +233,4 @@ Learn more about deployment:
* [How to deploy and where](how-to-deploy-and-where.md)
* [Tutorial: Train & deploy models](tutorial-train-deploy-notebook.md)
- * [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
+ * [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)

articles/machine-learning/tutorial-designer-automobile-price-train-score.md

Lines changed: 3 additions & 3 deletions
@@ -100,7 +100,7 @@ There are several sample datasets included in the designer for you to experiment
You can visualize the data to understand the dataset that you'll use.

- 1. Right-click the **Automobile price data (Raw)** and select **Visualize** > **Dataset output**.
+ 1. Right-click the **Automobile price data (Raw)** and select **Preview Data**.

1. Select the different columns in the data window to view information about each one.

@@ -281,7 +281,7 @@ Now that your pipeline is all setup, you can submit a pipeline run to train your
After the run completes, you can view the results of the pipeline run. First, look at the predictions generated by the regression model.

- 1. Right-click the **Score Model** component, and select **Visualize** > **Scored dataset** to view its output.
+ 1. Right-click the **Score Model** component, and select **Preview data** > **Scored dataset** to view its output.

Here you can see the predicted prices and the actual prices from the testing data.

@@ -291,7 +291,7 @@ After the run completes, you can view the results of the pipeline run. First, lo
Use the **Evaluate Model** to see how well the trained model performed on the test dataset.

- 1. Right-click the **Evaluate Model** component and select **Visualize** > **Evaluation results** to view its output.
+ 1. Right-click the **Evaluate Model** component and select **Preview data** > **Evaluation results** to view its output.

The following statistics are shown for your model:

articles/synapse-analytics/database-designer/concepts-database-templates.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ A foreign key is a column or a combination of columns whose values match a prima
## Composite key

- A composite key is one that is composed of two or more columns that are together required to uniquely identify a table. For example, in an Order table, both OrderNumber and ProductId may be required to uniquely identify a record.
+ A composite key is one that is composed of two or more columns that are together required to uniquely identify a record in a table. For example, in an Order table, both OrderNumber and ProductId may be required to uniquely identify a record.
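As a loose illustration (hypothetical data, not from the article), a composite key behaves like a tuple-valued key: neither column alone is unique, but the pair is.

```py
# Hypothetical Order rows: OrderNumber repeats and ProductId repeats,
# but the (OrderNumber, ProductId) pair uniquely identifies each record.
orders = {
    (1001, "P-01"): {"quantity": 2},
    (1001, "P-02"): {"quantity": 1},
    (1002, "P-01"): {"quantity": 5},
}
print(orders[(1001, "P-02")])  # {'quantity': 1}
```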

## Relationships

articles/synapse-analytics/get-started-analyze-spark.md

Lines changed: 7 additions & 4 deletions
@@ -31,16 +31,19 @@ A serverless Spark pool is a way of indicating how a user wants to work with Spa
## Analyze NYC Taxi data with a Spark pool

> [!NOTE]
- > Make sure you have [placed the sample data into the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account)
+ > Make sure you have [placed the sample data in the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account).
+
+ 1. In Synapse Studio, go to the **Develop** hub.
+ 1. Create a new notebook.
+ 1. Create a new code cell and paste the following code in that cell:

- 1. In Synapse Studio, go to the **Develop** hub
- 2. Create a new Notebook
- 3. Create a new code cell and paste the following code into that cell.
```py
%%pyspark
df = spark.read.load('abfss://[email protected]/NYCTripSmall.parquet', format='parquet')
display(df.limit(10))
```
+
+ 1. Modify the load URI, so it references the sample file in your storage account according to the [abfss URI scheme](../storage/blobs/data-lake-storage-introduction-abfs-uri.md).
1. In the notebook, in the **Attach to** menu, choose the **Spark1** serverless Spark pool that we created earlier.
1. Select **Run** on the cell. Synapse will start a new Spark session to run this cell if needed. If a new Spark session is needed, initially it will take about two seconds to be created.
1. If you just want to see the schema of the dataframe, run a cell with the following code:
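The schema cell itself lies outside this hunk; a minimal sketch of what such a cell typically contains, reusing the `df` dataframe loaded above, is:

```py
%%pyspark
# Print the inferred schema (column names and types) of the dataframe loaded above.
df.printSchema()
```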

articles/synapse-analytics/security/how-to-set-up-access-control.md

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ Identify the following information about your storage:
| --- | --- |
| Role | Storage Blob Data Contributor |
| Assign access to | SERVICEPRINCIPAL |
- | Members | workspace1_SynapseAdmins, workspace1_SynapseContributors, and workspace1_SynapseComputeOperators |
+ | Members | workspace1_SynapseAdministrators, workspace1_SynapseContributors, and workspace1_SynapseComputeOperators |

![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)

articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md

Lines changed: 1 addition & 1 deletion
@@ -349,7 +349,7 @@ print(connection_string)
```csharp
using Microsoft.Spark.Extensions.Azure.Synapse.Analytics.Utils;

- string connectionString = TokenLibrary.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>");
+ string connectionString = TokenLibrary.GetSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>");
Console.WriteLine(connectionString);
```

articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md

Lines changed: 1 addition & 3 deletions
@@ -37,7 +37,7 @@ Increasing DWUs:
- Linearly changes performance of the system for scans, aggregations, and CTAS statements
- Increases the number of readers and writers for PolyBase load operations
- - Increases the maximum number of concurrent queries and concurrency slots.
+ - Increases the maximum number of concurrent queries and concurrency slots

## Service Level Objective

@@ -67,8 +67,6 @@ Each performance tier uses a slightly different unit of measure for their data w
Both DWUs and cDWUs support scaling compute up or down, and pausing compute when you don't need to use the data warehouse. These operations are all on-demand. Gen2 uses a local disk-based cache on the compute nodes to improve performance. When you scale or pause the system, the cache is invalidated and so a period of cache warming is required before optimal performance is achieved.

- Each SQL server (for example, myserver.database.windows.net) has a [Database Transaction Unit (DTU)](../../azure-sql/database/service-tiers-dtu.md) quota that allows a specific number of data warehouse units. For more information, see the [workload management capacity limits](sql-data-warehouse-service-capacity-limits.md#workload-management).

## Capacity limits

Each SQL server (for example, myserver.database.windows.net) has a [Database Transaction Unit (DTU)](../../azure-sql/database/service-tiers-dtu.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) quota that allows a specific number of data warehouse units. For more information, see the [workload management capacity limits](sql-data-warehouse-service-capacity-limits.md#workload-management).
