
Commit 0f3cc73

Merge pull request #264200 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents d713cbb + 44f06de commit 0f3cc73

File tree

4 files changed: +89 -9 lines changed


articles/azure-netapp-files/tools-reference.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.author: anfdocs
 
 # Azure NetApp Files tools
 
-Azure NetApp Files offers [multiple tools](https://azure.github.io/azure-netapp-files/) to estimate costs, understand features and availability, and monitor your Azure NetApp Files deployment.
+Azure NetApp Files offers [multiple tools](https://aka.ms/anftools) to estimate costs, understand features and availability, and monitor your Azure NetApp Files deployment.
 
 * [**Azure NetApp Files Performance Calculator**](https://aka.ms/anfcalc)

articles/mysql/single-server/how-to-restore-server-portal.md

Lines changed: 6 additions & 6 deletions
@@ -48,9 +48,9 @@ In the screenshot below it has been increased to 34 days.
 The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
 
 ## Point-in-time restore
-Azure Database for MySQL allows you to restore the server back to a point-in-time and into to a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
+Azure Database for MySQL allows you to restore the server back to a point-in-time and into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
 
-For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
+For example, if a table was accidentally dropped at noon today, you could restore it to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
 
 The following steps restore the sample server to a point-in-time:
 1. In the Azure portal, select your Azure Database for MySQL server.
@@ -64,8 +64,8 @@ The following steps restore the sample server to a point-in-time:
    :::image type="content" source="./media/how-to-restore-server-portal/3-restore.png" alt-text="Azure Database for MySQL - Restore information":::
 
    - **Restore point**: Select the point-in-time you want to restore to.
    - **Target server**: Provide a name for the new server.
-   - **Location**: You cannot select the region. By default it is same as the source server.
-   - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is same as the source server.
+   - **Location**: You cannot select the region. By default, it is the same as the source server.
+   - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is the same as the source server.
 
 4. Click **OK** to restore the server to restore to a point-in-time.
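As the section above notes, the backup retention period bounds how far back a point-in-time restore can reach. A minimal illustrative sketch of that constraint in Python (the function and names here are hypothetical, not part of any Azure SDK):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def valid_restore_point(requested: datetime, retention_days: int,
                        now: Optional[datetime] = None) -> bool:
    """A point-in-time restore can only reach back as far as retained backups."""
    now = now or datetime.now(timezone.utc)
    earliest = now - timedelta(days=retention_days)
    return earliest <= requested <= now

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
print(valid_restore_point(datetime(2024, 1, 5, tzinfo=timezone.utc), 34, now))   # True: inside the 34-day window
print(valid_restore_point(datetime(2023, 12, 1, tzinfo=timezone.utc), 34, now))  # False: older than retention
```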

@@ -77,7 +77,7 @@ Additionally, after the restore operation finishes, there are two server paramet
 * time_zone - This value to set to DEFAULT value **SYSTEM**
 * event_scheduler - The event_scheduler is set to **OFF** on the restored server
 
-You will need to copy over the value from teh primary server and set it on the restored server by reconfiguring the [server parameter](how-to-server-parameters.md)
+You will need to copy over the value from the primary server and set it on the restored server by reconfiguring the [server parameter](how-to-server-parameters.md)
 
 The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.

@@ -90,7 +90,7 @@ If you configured your server for geographically redundant backups, a new server
 
 2. Provide the subscription, resource group, and name of the new server.
 
-3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
+3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo-redundant backups enabled.
 
    :::image type="content" source="./media/how-to-restore-server-portal/3-geo-restore.png" alt-text="Select data source.":::

(binary file changed, 21.1 KB)

articles/synapse-analytics/spark/microsoft-spark-utilities.md

Lines changed: 82 additions & 2 deletions
@@ -1,12 +1,12 @@
 ---
 title: Introduction to Microsoft Spark utilities
 description: "Tutorial: MSSparkutils in Azure Synapse Analytics notebooks"
-author: ruixinxu
+author: JeneZhang
 ms.service: synapse-analytics
 ms.topic: reference
 ms.subservice: spark
 ms.date: 09/10/2020
-ms.author: ruxu
+ms.author: jingzh
 zone_pivot_groups: programming-languages-spark-all-minus-sql
 ms.custom: subject-rbac-steps, devx-track-python
 ---
@@ -390,6 +390,17 @@ mssparkutils.fs.cp('source file or directory', 'destination file or directory',
 ```
 ::: zone-end
 
+### Performant copy file
+
+This method provides a faster way of copying or moving files, especially large volumes of data.
+
+```python
+mssparkutils.fs.fastcp('source file or directory', 'destination file or directory', True) # Set the third parameter to True to copy all files and directories recursively
+```
+
+> [!NOTE]
+> This method is supported only in Spark 3.3 and Spark 3.4.
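For intuition about what the recursive flag does, here is a rough local-filesystem analogy using Python's standard library (this is only an analogy; `fastcp` itself operates on the storage paths available to the Spark session, not via `shutil`):

```python
import os
import shutil
import tempfile

# Build a small source tree: src/sub/a.txt
src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), "copy")
os.makedirs(os.path.join(src, "sub"))
with open(os.path.join(src, "sub", "a.txt"), "w") as f:
    f.write("hello")

# Analogous to fs.fastcp(src, dst, True): directories are copied recursively
shutil.copytree(src, dst)
print(sorted(os.listdir(dst)))  # ['sub']
```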
 
 ### Preview file content
 
 Returns up to the first 'maxBytes' bytes of the given file as a String encoded in UTF-8.
@@ -605,6 +616,75 @@ After the run finished, you will see a snapshot link named '**View notebook run:
 
 ![Screenshot of a snap link python](./media/microsoft-spark-utilities/spark-utilities-run-notebook-snap-link-sample-python.png)
 
+### Reference run multiple notebooks in parallel
+
+The method `mssparkutils.notebook.runMultiple()` allows you to run multiple notebooks in parallel or with a predefined topological structure. The API uses a multi-thread implementation within a Spark session, which means the compute resources are shared by the reference notebook runs.
+
+With `mssparkutils.notebook.runMultiple()`, you can:
+
+- Execute multiple notebooks simultaneously, without waiting for each one to finish.
+- Specify the dependencies and order of execution for your notebooks, using a simple JSON format.
+- Optimize the use of Spark compute resources and reduce the cost of your Synapse projects.
+- View the snapshots of each notebook run record in the output, and debug or monitor your notebook tasks conveniently.
+- Get the exit value of each activity and use it in downstream tasks.
+
+You can also run `mssparkutils.notebook.help("runMultiple")` to find an example and detailed usage.
+
+Here's a simple example of running a list of notebooks in parallel using this method:
+
+```python
+mssparkutils.notebook.runMultiple(["NotebookSimple", "NotebookSimple2"])
+```
+
+The execution result from the root notebook is as follows:
+
+:::image type="content" source="media\microsoft-spark-utilities\spark-utilities-run-notebook-list.png" alt-text="Screenshot of referencing a list of notebooks." lightbox="media\microsoft-spark-utilities\spark-utilities-run-notebook-list.png":::
+
+The following is an example of running notebooks with a topological structure using `mssparkutils.notebook.runMultiple()`. Use this method to easily orchestrate notebooks through a code experience.
+
+```python
+# Run multiple notebooks with parameters
+DAG = {
+    "activities": [
+        {
+            "name": "NotebookSimple",      # activity name, must be unique
+            "path": "NotebookSimple",      # notebook path
+            "timeoutPerCellInSeconds": 90, # max timeout for each cell, defaults to 90 seconds
+            "args": {"p1": "changed value", "p2": 100}, # notebook parameters
+        },
+        {
+            "name": "NotebookSimple2",
+            "path": "NotebookSimple2",
+            "timeoutPerCellInSeconds": 120,
+            "args": {"p1": "changed value 2", "p2": 200}
+        },
+        {
+            "name": "NotebookSimple2.2",
+            "path": "NotebookSimple2",
+            "timeoutPerCellInSeconds": 120,
+            "args": {"p1": "changed value 3", "p2": 300},
+            "retry": 1,
+            "retryIntervalInSeconds": 10,
+            "dependencies": ["NotebookSimple"] # list of activity names that this activity depends on
+        }
+    ]
+}
+mssparkutils.notebook.runMultiple(DAG)
+```
+
+> [!NOTE]
+>
+> - This method is supported only in Spark 3.3 and Spark 3.4.
+> - The parallelism degree of a multiple-notebook run is restricted to the total available compute resources of the Spark session.
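To make the dependency semantics above concrete, here is a small self-contained sketch of running such a DAG of activities on threads, where each activity waits for the activities named in its `dependencies` list before starting. This is an illustration of the scheduling idea only, not the Synapse implementation; `run_dag` and `worker` are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def run_dag(dag, run_activity):
    """Run each activity on its own thread; an activity blocks until every
    activity named in its "dependencies" list has finished."""
    done = {a["name"]: threading.Event() for a in dag["activities"]}
    results = {}

    def worker(activity):
        for dep in activity.get("dependencies", []):
            done[dep].wait()                 # block until each dependency finishes
        results[activity["name"]] = run_activity(activity)
        done[activity["name"]].set()         # signal any dependents

    # One thread per activity, so waiting activities can never starve runnable ones
    with ThreadPoolExecutor(max_workers=len(done)) as pool:
        for a in dag["activities"]:
            pool.submit(worker, a)
    # exiting the with-block waits for every worker to finish
    return results

order = []
dag = {"activities": [
    {"name": "A", "path": "A"},
    {"name": "B", "path": "B", "dependencies": ["A"]},
]}
res = run_dag(dag, lambda a: order.append(a["name"]) or a["name"])
print(order)  # ['A', 'B'] -- B never starts before A finishes
```

Independent activities run concurrently, while `B` always starts after `A` completes, which mirrors the ordering guarantee the `dependencies` field provides for notebook activities.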
 ### Exit a notebook
 Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline.
