articles/azure-functions/functions-bindings-signalr-service-input.md (5 additions, 0 deletions)
@@ -134,6 +134,11 @@ public SignalRConnectionInfo negotiate(
:::zone-end
+
+> [!Warning]
+> For simplicity, we omit the authentication and authorization parts in this sample. As a result, this endpoint is publicly accessible without any restrictions. To ensure the security of your negotiation endpoint, you should implement appropriate authentication and authorization mechanisms based on your specific requirements. For guidance on protecting your HTTP endpoints, see the following articles:
articles/azure-signalr/signalr-concept-client-negotiation.md (9 additions, 12 deletions)
@@ -200,22 +200,19 @@ You can find a full sample on how to use the Management SDK to redirect SignalR
### Azure SignalR Service function extension
-When you use an Azure function app, you can work with the function extension. Here's a sample of using `SignalRConnectionInfo` to help you build the negotiation response:
+When you use an Azure function app, you can work with the function extension. Here's a sample of using `SignalRConnectionInfo` in the C# isolated worker model to help you build the negotiation response:
> For simplicity, we omit the authentication and authorization parts in this sample. As a result, this endpoint is publicly accessible without any restrictions. To ensure the security of your negotiation endpoint, you should implement appropriate authentication and authorization mechanisms based on your specific requirements. For guidance on protecting your HTTP endpoints, see the following articles:
> * [Authentication and authorization in Azure App Service and Azure Functions](../app-service/overview-authentication-authorization.md)
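The sample itself lies outside the lines shown here; a minimal sketch of such a negotiate function in the C# isolated worker model, assuming the `Microsoft.Azure.Functions.Worker.Extensions.SignalRService` extension package and a placeholder hub name `chat`, could look like this:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class NegotiateFunction
{
    // Returns the negotiation response (service URL and access token) that a
    // SignalR client needs to connect to Azure SignalR Service.
    // The hub name "chat" is a placeholder; no authentication is applied here,
    // as called out in the warning above.
    [Function("negotiate")]
    public static string Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req,
        [SignalRConnectionInfoInput(HubName = "chat")] string connectionInfo)
    {
        return connectionInfo;
    }
}
```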
Then your clients can request the function endpoint `https://<Your Function App Name>.azurewebsites.net/api/negotiate` to get the service URL and access token. You can find a full sample on [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/BidirectionChat).
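As a rough illustration of the client side (a hedged sketch, not part of this change), a .NET SignalR client can point at the function app's `/api` base URL and let the SDK call the `negotiate` endpoint on its own; the URL is a placeholder:

```csharp
using Microsoft.AspNetCore.SignalR.Client;

// The client SDK appends "/negotiate" to this base URL, reads the returned
// service URL and access token, and then connects to Azure SignalR Service.
var connection = new HubConnectionBuilder()
    .WithUrl("https://<Your Function App Name>.azurewebsites.net/api")
    .Build();

await connection.StartAsync();
```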
+
+For `SignalRConnectionInfo` input binding samples in other languages, see [Azure Functions SignalR Service input binding](../azure-functions/functions-bindings-signalr-service-input.md).
### Self-exposing `/negotiate` endpoint
You could also expose the negotiation endpoint in your own server and return the negotiation response yourself if you're using other languages.
articles/data-factory/format-excel.md (9 additions, 6 deletions)
@@ -15,10 +15,10 @@ ms.author: jianleishen
Follow this article when you want to **parse the Excel files**. The service supports both ".xls" and ".xlsx".
-Excel format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It is supported as source but not sink.
+Excel format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It's supported as source but not sink.
>[!NOTE]
->".xls" format is not supported while using [HTTP](connector-http.md).
+>".xls" format isn't supported while using [HTTP](connector-http.md).
## Dataset properties
@@ -35,7 +35,7 @@ For a full list of sections and properties available for defining datasets, see
| nullValue | Specifies the string representation of null value. <br>The default value is **empty string**. | No |
| compression | Group of properties to configure file compression. Configure this section when you want to do compression/decompression during activity execution. | No |
| type<br/>(*under `compression`*) | The compression codec used to read/write JSON files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed.<br>**Note** currently Copy activity doesn't support "snappy" & "lz4", and mapping data flow doesn't support "ZipDeflate", "TarGzip" and "Tar".<br>**Note** when using copy activity to decompress **ZipDeflate** file(s) and write to file-based sink data store, files are extracted to the folder: `<path specified in dataset>/<folder named as source zip file>/`. | No. |
-| level<br/>(*under `compression`*) | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No |
+| level<br/>(*under `compression`*) | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file isn't optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No |
Below is an example of Excel dataset on Azure Blob Storage:
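The example dataset itself falls outside the lines shown above; as a rough sketch, where the linked service name, container, folder path, and sheet name are placeholders, such a dataset definition could look like this:

```json
{
    "name": "ExcelDataset",
    "properties": {
        "type": "Excel",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "containername",
                "folderPath": "folder/subfolder"
            },
            "sheetName": "MyWorksheet",
            "firstRowAsHeader": true,
            "nullValue": "NULL"
        }
    }
}
```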
@@ -102,7 +102,7 @@ In mapping data flows, you can read Excel format in the following data stores: [
### Source properties
-The below table lists the properties supported by an Excel source. You can edit these properties in the **Source options** tab. When using inline dataset, you will see additional file settings, which are the same as the properties described in [dataset properties](#dataset-properties) section.
+The below table lists the properties supported by an Excel source. You can edit these properties in the **Source options** tab. When using inline dataset, you'll see additional file settings, which are the same as the properties described in [dataset properties](#dataset-properties) section.
| Name | Description | Required | Allowed values | Data flow script property |
@@ -112,7 +112,7 @@ The below table lists the properties supported by an Excel source. You can edit
| Column to store file name | Create a new column with the source file name and path | no | String | rowUrlColumn |
| After completion | Delete or move the files after processing. File path starts from the container root | no | Delete: `true` or `false` <br> Move: `['<from>', '<to>']`| purgeFiles <br> moveFiles |
| Filter by last modified | Choose to filter files based upon when they were last altered | no | Timestamp | modifiedAfter <br> modifiedBefore |
-| Allow no files found | If true, an error is not thrown if no files are found | no |`true` or `false`| ignoreNoFilesFound |
+| Allow no files found | If true, an error isn't thrown if no files are found | no |`true` or `false`| ignoreNoFilesFound |
> Mapping data flow doesn't support reading protected Excel files, as these files may contain confidentiality notices or enforce access restrictions that limit access to their contents.
## Handling very large Excel files
-The Excel connector does not support streaming read for the Copy activity and must load the entire file into memory before data can be read. To import schema, preview data, or refresh an Excel dataset, the data must be returned before the http request timeout (100s). For large Excel files, these operations may not finish within that timeframe, causing a timeout error. If you want to move large Excel files (>100MB) into another data store, you can use one of following options to work around this limitation:
+The Excel connector doesn't support streaming read for the Copy activity and must load the entire file into memory before data can be read. To import schema, preview data, or refresh an Excel dataset, the data must be returned before the HTTP request timeout (100 s). For large Excel files, these operations may not finish within that timeframe, causing a timeout error. If you want to move large Excel files (>100 MB) into another data store, you can use one of the following options to work around this limitation:
- Use the self-hosted integration runtime (SHIR), then use the Copy activity to move the large Excel file into another data store with the SHIR.
- Split the large Excel file into several smaller ones, then use the Copy activity to move the folder containing the files.