
Commit 063ccdc

committed: Acrolinx
1 parent dba9079 commit 063ccdc


5 files changed: +8 −8 lines


articles/synapse-analytics/database-designer/quick-start-create-lake-database.md

Lines changed: 4 additions & 4 deletions
@@ -7,7 +7,7 @@ ms.reviewer: wiassaf, jovanpop
 ms.service: azure-synapse-analytics
 ms.subservice: database-editor
 ms.topic: quickstart
-ms.date: 08/16/2022
+ms.date: 12/31/2024
 ms.custom: template-concept
 ---

@@ -23,9 +23,9 @@ This quick start gives you a complete sample scenario on how you can apply datab
 
 ## Create a lake database from database templates
 
-Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
+Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
 
-For our scenario we will use the `Retail` database template and select the following entities:
+For our scenario we'll use the `Retail` database template and select the following entities:
 
 - **RetailProduct** - A product is anything that can be offered to a market that might satisfy a need by potential customers. That product is the sum of all physical, psychological, symbolic, and service attributes associated with it.
 - **Transaction** - The lowest level of executable work or customer activity.
@@ -43,7 +43,7 @@ The easiest way to find entities is by using the search box above the different
 
 After you have created the database, make sure the storage account and the filepath is set to a location where you wish to store the data. The path will default to the primary storage account within Azure Synapse Analytics but can be changed to your needs.
 
-:::image type="content" source="./media/quick-start-create-lake-database/lake-database-example.png" alt-text="Screenshot of an individual entity properties in the Retail database template." lightbox="./media/quick-start-create-lake-database/lake-database-example.png":::
+:::image type="content" source="./media/quick-start-create-lake-database/lake-database-example.png" alt-text="Screenshot of an individual entity property in the Retail database template." lightbox="./media/quick-start-create-lake-database/lake-database-example.png":::
 
 To save your layout and make it available within Azure Synapse, **Publish** all changes. This step completes the setup of the lake database and makes it available to all components within Azure Synapse Analytics and outside.

articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::
 
 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.
 
 Implement Data Factory pipeline development from any of several places, including:

articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::
 
 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.
 
 Implement Data Factory pipeline development from any of several places, including:

articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::
 
 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.
 
 Implement Data Factory pipeline development from any of several places, including:

articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ There are several options when training machine learning models using Azure Spar
 Learn more about the machine learning capabilities by viewing the article on how to [train models in Azure Synapse Analytics](../spark/apache-spark-machine-learning-training.md).
 
 ### SparkML and MLlib
-Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines.To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://archive.apache.org/dist/spark/docs/1.2.2/ml-guide.html).
+Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines. To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://archive.apache.org/dist/spark/docs/1.2.2/ml-guide.html).
 
 ### Open-source libraries
 Every Apache Spark pool in Azure Synapse Analytics comes with a set of pre-loaded and popular machine learning libraries. Some of the relevant machine learning libraries that are included by default include:
