
Commit f58f246 (merge of 2 parents: d47b1b1 + a3a7f2f)

File tree: 4 files changed (+61, -55 lines)

articles/azure-arc/servers/prerequisites.md

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ This topic describes the basic requirements for installing the Connected Machine

 Azure Arc-enabled servers support the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:

-* VMware
+* VMware (including Azure VMware Solution)
 * Azure Stack HCI
 * Other cloud environments

articles/sentinel/whats-new-archive.md

Lines changed: 52 additions & 0 deletions

@@ -25,6 +25,58 @@ Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](

> You can also contribute! Join us in the [Azure Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).

## September 2021

- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
### Data connector health enhancements (Public preview)

Azure Sentinel now lets you enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Azure Sentinel health feature](monitor-sentinel-health.md) in your Azure Sentinel workspace, at the first success or failure health event generated.

For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).

> [!NOTE]
> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
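Once health events start flowing, the table can be queried like any other Log Analytics table. Here is a minimal KQL sketch, composed as a Python string; the `Status` and `SentinelResourceName` column names are assumptions based on the linked monitoring docs, not something this commit confirms:

```python
# Hedged sketch: compose a KQL query over the SentinelHealth table.
# Column names (Status, SentinelResourceName) are assumptions; check the
# "Supported data connectors" reference for the actual schema.

def failed_connector_query(hours: int = 24) -> str:
    """Return a KQL query listing connectors with recent failure events."""
    return (
        "SentinelHealth\n"
        f"| where TimeGenerated > ago({hours}h)\n"
        '| where Status == "Failure"\n'
        "| summarize Failures = count() by SentinelResourceName\n"
        "| order by Failures desc"
    )

print(failed_connector_query(12))
```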
### New in docs: scaling data connector documentation

As we continue to add more and more built-in data connectors for Azure Sentinel, we reorganized our data connector documentation to reflect this scaling.

For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.

Check the [Azure Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.

For more information, see:

- **Conceptual information**: [Connect data sources](connect-data-sources.md)

- **Generic how-to articles**:

  - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
  - [Connect your data source to the Azure Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
  - [Get CEF-formatted logs from your device or appliance into Azure Sentinel](connect-common-event-format.md)
  - [Collect data from Linux-based sources using Syslog](connect-syslog.md)
  - [Collect data in custom log formats to Azure Sentinel with the Log Analytics agent](connect-custom-logs.md)
  - [Use Azure Functions to connect your data source to Azure Sentinel](connect-azure-functions-template.md)
  - [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md)
### Azure Storage account connector changes

Due to some changes made within the Azure Storage account resource configuration itself, the connector also needs to be reconfigured. The storage account (parent) resource contains other (child) resources for each type of storage: files, tables, queues, and blobs.

When configuring diagnostics for a storage account, you must select and configure, in turn:

- The parent account resource, exporting the **Transaction** metric.
- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).

You'll only see the storage types that you actually have defined resources for.

:::image type="content" source="media/whats-new/storage-diagnostics.png" alt-text="Screenshot of Azure Storage diagnostics configuration.":::
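The parent/child pattern above can be sketched as a small helper that enumerates the diagnostic settings to create. This is an illustrative sketch, not the connector's actual code: the `blobServices/default` resource-ID layout follows Azure's convention, and the log category names are assumptions to be replaced with the ones from the table:

```python
# Hedged sketch: enumerate the diagnostic settings the section describes.
# One entry for the parent account (Transaction metric only), plus one per
# child storage-type resource (all logs and metrics). The log category
# names below are illustrative assumptions, not an authoritative list.

def diagnostic_targets(account_id: str, storage_types: list[str]) -> list[dict]:
    log_categories = ["StorageRead", "StorageWrite", "StorageDelete"]
    # Parent account resource: export only the Transaction metric.
    targets = [{"resource": account_id, "metrics": ["Transaction"], "logs": []}]
    # Child resources: only the storage types you actually have defined.
    for stype in storage_types:  # e.g. "blobServices", "queueServices"
        targets.append({
            "resource": f"{account_id}/{stype}/default",
            "metrics": ["Transaction"],
            "logs": log_categories,
        })
    return targets

print(diagnostic_targets("mystorageaccount", ["blobServices", "queueServices"]))
```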
## August 2021

- [Advanced incident search (Public preview)](#advanced-incident-search-public-preview)

articles/sentinel/whats-new.md

Lines changed: 0 additions & 52 deletions

@@ -702,58 +702,6 @@ For more information, see:

- [Azure Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)

Removed: the entire "September 2021" section, identical to the 52 lines added to articles/sentinel/whats-new-archive.md in this commit.
## Next steps

> [!div class="nextstepaction"]

articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md

Lines changed: 8 additions & 2 deletions

@@ -178,7 +178,7 @@ The error "Invalid object name 'table name'" indicates that you're using an obje

 - If you don't see the object, maybe you're trying to query a table from a lake or Spark database. The table might not be available in the serverless SQL pool because:

   - The table has some column types that can't be represented in serverless SQL pool.
-  - The table has a format that isn't supported in serverless SQL pool. Examples are Delta or ORC.
+  - The table has a format that isn't supported in serverless SQL pool. Examples are Avro or ORC.

### Unclosed quotation mark after the character string

@@ -854,7 +854,7 @@ There are some limitations and known issues that you might see in Delta Lake sup

 - Make sure that you're referencing the root Delta Lake folder in the [OPENROWSET](./develop-openrowset.md) function or external table location.
 - The root folder must have a subfolder named `_delta_log`. The query fails if there's no `_delta_log` folder. If you don't see that folder, you're referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) by using Apache Spark pools.
 - Don't specify wildcards to describe the partition schema. The Delta Lake query automatically identifies the Delta Lake partitions.
-- Delta Lake tables created in the Apache Spark pools aren't automatically available in serverless SQL pool. To query such Delta Lake tables by using the T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as the format.
+- Delta Lake tables created in the Apache Spark pools are automatically available in serverless SQL pool, but the schema is not updated. If you add columns to the Delta table by using a Spark pool, the changes won't be shown in the serverless database.
 - External tables don't support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on the Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article.
 - Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
 - Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
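The `_delta_log` requirement above is easy to pre-check before pointing OPENROWSET at a folder. Here is a minimal sketch for a local filesystem; for ADLS paths you'd need the storage SDK instead:

```python
from pathlib import Path

def is_delta_root(folder: str) -> bool:
    """True if the folder has the `_delta_log` subfolder that serverless
    SQL pool requires; False means you're likely pointing at plain
    Parquet files that still need converting to Delta Lake."""
    return (Path(folder) / "_delta_log").is_dir()
```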
@@ -901,6 +901,12 @@ There are two options available to circumvent this error:

Our engineering team is currently working on full support for Spark 3.3.

### Delta tables in Lake databases do not have identical schema in Spark and serverless pools

Serverless SQL pools let you access Parquet, CSV, and Delta tables that are created in a Lake database by using Spark or Synapse designer. Access to Delta tables is still in public preview, and currently serverless SQL pool synchronizes a Delta table with Spark at the time of creation, but doesn't update the schema if columns are added later by using the `ALTER TABLE` statement in Spark.

This is a public preview limitation. To resolve this issue, drop and re-create the Delta table in Spark (if possible) instead of altering tables.
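The drop-and-re-create workaround amounts to running Spark SQL DDL instead of `ALTER TABLE`. A hedged sketch that only builds the statement strings (the database, table, and column names are hypothetical, and actually running them assumes a live `spark` session):

```python
# Hedged sketch: statements to run in a Spark pool instead of ALTER TABLE,
# so serverless SQL pool re-synchronizes the schema at creation time.
# All names (lakedb, events, new_col) are hypothetical placeholders.

def recreate_statements(db: str, table: str, columns: str, location: str) -> list[str]:
    """Build DDL to drop and re-create a Delta table at a given location."""
    return [
        f"DROP TABLE IF EXISTS {db}.{table}",
        f"CREATE TABLE {db}.{table} ({columns}) USING DELTA LOCATION '{location}'",
    ]

for stmt in recreate_statements("lakedb", "events",
                                "id INT, name STRING, new_col STRING",
                                "/delta/events"):
    print(stmt)  # in a Spark pool you'd execute each via spark.sql(stmt)
```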
## Performance

Serverless SQL pool assigns resources to queries based on the size of the dataset and query complexity. You can't change or limit the resources that are provided to the queries. In some cases you might experience unexpected query performance degradation and have to identify the root causes.
