articles/synapse-analytics/database-designer/quick-start-create-lake-database.md (4 additions, 4 deletions)

@@ -7,7 +7,7 @@ ms.reviewer: wiassaf, jovanpop
 ms.service: azure-synapse-analytics
 ms.subservice: database-editor
 ms.topic: quickstart
-ms.date: 08/16/2022
+ms.date: 12/31/2024
 ms.custom: template-concept
 ---

@@ -23,9 +23,9 @@ This quick start gives you a complete sample scenario on how you can apply datab
 ## Create a lake database from database templates
-Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
+Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
-For our scenario we will use the `Retail` database template and select the following entities:
+For our scenario we'll use the `Retail` database template and select the following entities:
 - **RetailProduct** - A product is anything that can be offered to a market that might satisfy a need by potential customers. That product is the sum of all physical, psychological, symbolic, and service attributes associated with it.
 - **Transaction** - The lowest level of executable work or customer activity.

@@ -43,7 +43,7 @@ The easiest way to find entities is by using the search box above the different
 After you have created the database, make sure the storage account and the filepath is set to a location where you wish to store the data. The path will default to the primary storage account within Azure Synapse Analytics but can be changed to your needs.
-:::image type="content" source="./media/quick-start-create-lake-database/lake-database-example.png" alt-text="Screenshot of an individual entity properties in the Retail database template." lightbox="./media/quick-start-create-lake-database/lake-database-example.png":::
+:::image type="content" source="./media/quick-start-create-lake-database/lake-database-example.png" alt-text="Screenshot of an individual entity property in the Retail database template." lightbox="./media/quick-start-create-lake-database/lake-database-example.png":::
 To save your layout and make it available within Azure Synapse, **Publish** all changes. This step completes the setup of the lake database and makes it available to all components within Azure Synapse Analytics and outside.
articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md (1 addition, 1 deletion)

@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::

 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.

 Implement Data Factory pipeline development from any of several places, including:
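Although the authoring experience is visual, Data Factory persists each pipeline as a JSON resource definition. The following is a minimal, hypothetical sketch of what such a definition looks like; the pipeline, activity, and dataset names are illustrative assumptions, not taken from the articles being edited:

```json
{
  "name": "CopyStagedDataPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyStagedFilesToSynapse",
        "type": "Copy",
        "inputs": [
          { "referenceName": "StagedBlobDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "SynapseTableDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "SqlDWSink", "allowCopyCommand": true }
        }
      }
    ]
  }
}
```

The visual designer generates and edits this JSON for you, which is what "without code" means in practice: you rarely need to author the definition by hand.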
articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md (1 addition, 1 deletion)

@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::

 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.

 Implement Data Factory pipeline development from any of several places, including:
articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md (1 addition, 1 deletion)

@@ -83,7 +83,7 @@ You can use these features without writing any code, or you can add custom code
 :::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot of an example of a Data Factory pipeline." lightbox="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline-lrg.png":::

 >[!TIP]
->Data Factory lets you to build scalable data integration pipelines without code.
+>Data Factory lets you build scalable data integration pipelines without code.

 Implement Data Factory pipeline development from any of several places, including:
articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md (1 addition, 1 deletion)

@@ -34,7 +34,7 @@ There are several options when training machine learning models using Azure Spar
 Learn more about the machine learning capabilities by viewing the article on how to [train models in Azure Synapse Analytics](../spark/apache-spark-machine-learning-training.md).

 ### SparkML and MLlib
-Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines.To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://archive.apache.org/dist/spark/docs/1.2.2/ml-guide.html).
+Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines. To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://archive.apache.org/dist/spark/docs/1.2.2/ml-guide.html).

 ### Open-source libraries
 Every Apache Spark pool in Azure Synapse Analytics comes with a set of pre-loaded and popular machine learning libraries. Some of the relevant machine learning libraries that are included by default include: