Commit 4913147

Merge pull request #102582 from changeworld/patch-3
Fix typo
2 parents feb293a + 82be05d

6 files changed (+7, -7)

articles/purview/create-microsoft-purview-python.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.custom: mode-api

 # Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python

-In this quickstart, you’ll create a Microsoft Purview (formerly Azure Purview) account programatically using Python. [The python reference for Microsoft Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
+In this quickstart, you’ll create a Microsoft Purview (formerly Azure Purview) account programatically using Python. [The Python reference for Microsoft Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.

 The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
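For context, the quickstart touched here ultimately reduces to one call into the `azure-mgmt-purview` management SDK. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-purview` packages are installed and the placeholder subscription, resource group, account name, and region are filled in; this is an illustration, not the article's exact code:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.purview import PurviewManagementClient
from azure.mgmt.purview.models import Account

# Placeholder values; substitute your own subscription and names.
credential = DefaultAzureCredential()
client = PurviewManagementClient(credential, "<subscription-id>")

# begin_create_or_update returns a long-running-operation poller;
# .result() blocks until the account finishes provisioning.
poller = client.accounts.begin_create_or_update(
    "<resource-group>",
    "<purview-account-name>",
    Account(location="<region>"),
)
account = poller.result()
print(account.name, account.provisioning_state)
```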

articles/purview/tutorial-using-python-sdk.md

Lines changed: 2 additions & 2 deletions
@@ -116,7 +116,7 @@ Microsoft Purview now has the required reading right to scan your Blob Storage.

 ## Create Python script file

-Create a plain text file, and save it as a python script with the suffix .py.
+Create a plain text file, and save it as a Python script with the suffix .py.
 For example: tutorial.py.

 ## Instantiate a Scanning, Catalog, and Administration client

@@ -194,7 +194,7 @@ In this section, you'll register your Blob Storage.

 1. Gather the resource ID for your storage account by following this guide: [get the resource ID for a storage account.](../storage/common/storage-account-get-info.md#get-the-resource-id-for-a-storage-account)

-1. Then, in your python file, define the following information to be able to register the Blob storage programmatically:
+1. Then, in your Python file, define the following information to be able to register the Blob storage programmatically:

 ```python
 storage_name = "<name of your Storage Account>"
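The "Instantiate a Scanning, Catalog, and Administration client" heading in this hunk refers to three Purview data-plane clients. A hedged sketch of that setup, assuming the `azure-identity`, `azure-purview-scanning`, `azure-purview-catalog`, and `azure-purview-administration` packages and a service principal with access to the account; the endpoint shapes and placeholder names are assumptions, not the tutorial's exact code:

```python
from azure.identity import ClientSecretCredential
from azure.purview.scanning import PurviewScanningClient
from azure.purview.catalog import PurviewCatalogClient
from azure.purview.administration.account import PurviewAccountClient

purview_account = "<name of your Purview account>"  # placeholder

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)

# The scanning client uses the account's scan endpoint; the catalog and
# administration clients use the account endpoint.
scanning = PurviewScanningClient(
    endpoint=f"https://{purview_account}.scan.purview.azure.com",
    credential=credential,
)
catalog = PurviewCatalogClient(
    endpoint=f"https://{purview_account}.purview.azure.com",
    credential=credential,
)
administration = PurviewAccountClient(
    endpoint=f"https://{purview_account}.purview.azure.com",
    credential=credential,
)
```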

articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ On this panel, you can reference to the Spark job definition to run.
 |Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`|
 |Command-line arguments| You can add command-line arguments by clicking the **New** button. It should be noted that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
 |Apache Spark pool| You can select Apache Spark pool from the list.|
-|Python code reference| Additional python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
+|Python code reference| Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
 |Reference files | Additional files used for reference in the main definition file. |
 |Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.|
 |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
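To make the table concrete: a minimal sketch of a PySpark main definition file consistent with the `WordCount` sample above, reading its input and output paths from the command-line arguments row (the paths are placeholders, as in the table):

```python
import sys
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # The two command-line arguments configured on the panel,
    # e.g. abfss://…/path/to/shakespeare.txt and abfss://…/path/to/result
    input_path, output_path = sys.argv[1], sys.argv[2]

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Split each line into words, count occurrences, and write the result.
    lines = spark.read.text(input_path)
    counts = (
        lines.rdd.flatMap(lambda row: row.value.split())
        .map(lambda word: (word, 1))
        .reduceByKey(lambda a, b: a + b)
    )
    counts.toDF(["word", "count"]).write.csv(output_path)

    spark.stop()
```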

articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md

Lines changed: 1 addition & 1 deletion
@@ -450,7 +450,7 @@ You can access data in the primary storage account directly. There's no need to

 ## IPython Widgets

-Widgets are eventful python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet.
+Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet.

 ### To use IPython Widget
 1. You need to import `ipywidgets` module first to use the Jupyter Widget framework.
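Step 1 of that list looks like the following in practice; a short example using the public `ipywidgets` API, with an arbitrary slider chosen for illustration:

```python
import ipywidgets as widgets
from IPython.display import display

# Create a slider control and render it in the notebook output.
slider = widgets.IntSlider(value=5, min=0, max=10, step=1, description="Count:")
display(slider)

# slider.value reflects the control's current position and can be read
# from a later cell after the user moves the slider.
```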

articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ When reading CSV data, the connector uses the Spark FAILFAST option by default.
 .option("entity", "permissive") or .option("mode", "failfast")
 ```

-For example, [here's an example python sample.](https://github.com/Azure/spark-cdm-connector/blob/master/samples/SparkCDMsamplePython.ipynb)
+For example, [here's an example Python sample.](https://github.com/Azure/spark-cdm-connector/blob/master/samples/SparkCDMsamplePython.ipynb)

 ## Writing data
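The linked notebook reads CDM entities with this connector; a hedged sketch of such a read in permissive mode, assuming a `spark` session is in scope and that the storage account, manifest path, and entity name below are placeholders:

```python
# Read a CDM entity with the Spark CDM connector; "permissive" tolerates
# malformed CSV rows instead of failing fast (the FAILFAST default).
df = (
    spark.read.format("com.microsoft.cdm")
    .option("storage", "<account>.dfs.core.windows.net")
    .option("manifestPath", "<container>/default.manifest.cdm.json")
    .option("entity", "<entity name>")
    .option("mode", "permissive")
    .load()
)
df.show()
```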

articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md

Lines changed: 1 addition & 1 deletion
@@ -1083,7 +1083,7 @@ Now the database has been restored you must recover the database. Please follow

 1. Unmount the restore point.

-When all databases on the VM have been successfully recovered you may unmount the restore point. This can be done on the VM using the `unmount` command or in Azure portal from the File Recovery blade. You can also unmount the recovery volumes by running the python script again with the **-clean** option.
+When all databases on the VM have been successfully recovered you may unmount the restore point. This can be done on the VM using the `unmount` command or in Azure portal from the File Recovery blade. You can also unmount the recovery volumes by running the Python script again with the **-clean** option.

 In the VM using unmount:
 ```bash

0 commit comments