
Commit d628121: Minor edits

1 parent 0b5ff34

File tree: 4 files changed (+9 -9 lines)

articles/synapse-analytics/security/synapse-private-link-hubs.md

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@ ms.reviewer: whhender
 
 This article explains how you can securely connect to Azure Synapse Studio from your Azure virtual network using private links. Azure Synapse Analytics private link hubs are Azure resources that act as connectors between your secured network and the Synapse Studio web experience.
 
-There are two steps to connect to Synapse Studio using private links:
+There are two steps for connecting to Synapse Studio using private links:
 
 1. Create an Azure private link hubs resource.
 1. Create a private endpoint from your Azure virtual network to this private link hub.
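The two steps above can be sketched with the Azure CLI. This is an illustrative sketch, not the article's own sample: it assumes a current Azure CLI with the `az synapse private-link-hub` and `az network private-endpoint` command groups, all resource names are placeholders, and the private link hub's sub-resource (group ID) is assumed to be `Web`.

```shell
# Step 1: create the private link hub (placeholder resource names throughout).
az synapse private-link-hub create \
  --resource-group my-rg \
  --name mylinkhub

# Step 2: create a private endpoint from your virtual network to the hub.
# The group ID "Web" is the assumed sub-resource for Synapse private link hubs.
hub_id=$(az synapse private-link-hub show \
  --resource-group my-rg --name mylinkhub --query id -o tsv)

az network private-endpoint create \
  --resource-group my-rg \
  --name mylinkhub-pe \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "$hub_id" \
  --group-id Web \
  --connection-name mylinkhub-connection
```

These commands require an authenticated Azure session (`az login`) and an existing virtual network and subnet, so they are shown as a sketch rather than a verified run.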
@@ -27,9 +27,9 @@ You can use a single Azure Synapse private link hub resource to privately connec
 
 Follow these steps to create an Azure private link hub:
 
-1. Sign in to the Azure portal and search for *Synapse private link hubs*.
+1. Sign in to the Azure portal and enter *Synapse private link hubs* in the search field.
 
-1. Select **Azure Synapse Analytics (private link hubs)** from the list under **Services**.
+1. Select **Azure Synapse Analytics (private link hubs)** from the results under **Services**.
 
 For a detailed guide, follow the steps in [Connect to workspace resources from a restricted network](./how-to-connect-to-workspace-from-restricted-network.md). Certain URLs must be accessible from the client browser after enabling Azure Synapse private link hub.

articles/synapse-analytics/spark/apache-spark-manage-pool-packages.md

Lines changed: 3 additions & 3 deletions
@@ -77,7 +77,7 @@ Spark pool libraries can be managed either from Synapse Studio or the Azure port
 
 A system reserved Spark job is initiated each time a pool is updated with a new set of libraries. This Spark job helps monitor the status of the library installation. If the installation fails due to library conflicts or other issues, the Spark pool reverts to its previous or default state.
 
-In addition, users can also inspect the installation logs to identify dependency conflicts or see which libraries were installed during the pool update.
+In addition, users can inspect the installation logs to identify dependency conflicts or see which libraries were installed during the pool update.
 
 To view these logs:

@@ -111,7 +111,7 @@ alabaster==0.7.10
 
 ### YML format
 
-In addition, you can also provide an *environment.yml* file to update the pool environment. The packages listed in this file are downloaded from the default Conda channels, Conda-Forge, and PyPI. You can specify other channels or remove the default channels by using the configuration options.
+In addition, you can provide an *environment.yml* file to update the pool environment. The packages listed in this file are downloaded from the default Conda channels, Conda-Forge, and PyPI. You can specify other channels or remove the default channels by using the configuration options.
 
 This example specifies the channels and Conda/PyPI dependencies.

@@ -127,7 +127,7 @@ dependencies:
 - koalas==1.7.0
 ```
 
-For details on creating an environment from this *environment.yml* file, see [Creating an environment from an environment.yml file](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
+For details on creating an environment from this *environment.yml* file, see [Activating an environment](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
 
 ## Related content

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ This article explains how to manage compute resources for dedicated SQL pool (fo
 
 The architecture of dedicated SQL pool separates storage and compute, allowing each to scale independently. As a result, you can scale compute to meet performance demands independent of data storage. You can also pause and resume compute resources.
 
-A natural consequence of this architecture is that [billing](https://azure.microsoft.com/pricing/details/synapse-analytics/) for compute and storage is separate. If you don't need to use your dedicated SQL pool for a while, you can save compute costs by pausing compute.
+A natural consequence of this architecture is that [pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/) for compute and storage is separate. If you don't need to use your dedicated SQL pool for a while, you can save compute costs by pausing compute.
 
 ## Scaling compute
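As a concrete illustration of scaling compute independently of storage, the service level objective of a dedicated SQL pool can be changed with T-SQL. This is a sketch, not the article's own sample: the pool name and the `DW300c` target are placeholders, and the statement runs against the logical server's `master` database.

```sql
-- Placeholder pool name and target service objective; run in master.
ALTER DATABASE mySampleDataWarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW300c');
```

The statement returns immediately while the scale operation proceeds in the background; in-flight transactions are rolled back as described later in the article.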

@@ -111,7 +111,7 @@ When you pause or scale your dedicated SQL pool, behind the scenes your queries
 
 Rolling back the work completed by a transactional query can take as long, or even longer, than the original change the query was applying. For example, if you cancel a query that was deleting rows and has already been running for an hour, it could take the system an hour to insert back the deleted rows. If you run pause or scaling while transactions are in flight, your pause or scaling might seem to take a long time because pausing and scaling has to wait for the rollback to complete before it can proceed.
 
-For more information, see [Understanding transactions](sql-data-warehouse-develop-transactions.md) and [Optimizing transactions](sql-data-warehouse-develop-best-practices-transactions.md).
+For more information, see [Use transactions](sql-data-warehouse-develop-transactions.md) and [Optimizing transactions](sql-data-warehouse-develop-best-practices-transactions.md).
 
 ## Automate compute management
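The pause and resume operations discussed above can be scripted with the Azure CLI. This sketch assumes the `az synapse sql pool` command group is available and uses placeholder resource names throughout.

```shell
# Pause compute to stop compute charges (storage is still billed separately).
az synapse sql pool pause \
  --resource-group my-rg \
  --workspace-name my-workspace \
  --name mypool

# Resume compute when the pool is needed again.
az synapse sql pool resume \
  --resource-group my-rg \
  --workspace-name my-workspace \
  --name mypool
```

Both commands block until the operation completes, which can take several minutes, longer if in-flight transactions must roll back first. They require an authenticated Azure session, so they are shown as a sketch rather than a verified run.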

articles/synapse-analytics/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -764,7 +764,7 @@ items:
 items:
 - name: Manage Workspace packages
   href: ./spark/apache-spark-manage-workspace-packages.md
-- name: Manage Spark pool libraries
+- name: Manage Spark pool packages
   href: ./spark/apache-spark-manage-pool-packages.md
 - name: Manage Notebook session packages
   href: ./spark/apache-spark-manage-session-packages.md
