
Commit ac84e4e

fixed links
1 parent 42129cb commit ac84e4e

File tree

1 file changed: +2 -2 lines changed

articles/stream-analytics/stream-analytics-define-inputs.md

Lines changed: 2 additions & 2 deletions
@@ -50,7 +50,7 @@ The following table explains each property in the **New input** page in the Azur
 | **Event Hub namespace** | The Event Hubs namespace is a container for event hubs. When you create an event hub, you also create the namespace. |
 | **Event Hub name** | The name of the event hub to use as input. |
 | **Event Hub consumer group** (recommended) | We recommend that you use a distinct consumer group for each Stream Analytics job. This string identifies the consumer group to use to ingest data from the event hub. If no consumer group is specified, the Stream Analytics job uses the `$Default` consumer group. |
-| **Authentication mode** | Specify the type of the authentication you want to use to connect to the event hub. You can use a connection string or a managed identity to authenticate with the event hub. For the managed identity option, you can either create a system-assigned managed identity to the Stream Analytics job or a user-assigned managed identity to authenticate with the event hub. When you use a managed identity, the managed identity must be a member of [Azure Event Hubs Data Receiver or Azure Event Hubs Data Owner roles](/event-hubs/authenticate-application#built-in-roles-for-azure-event-hubs). |
+| **Authentication mode** | Specify the type of the authentication you want to use to connect to the event hub. You can use a connection string or a managed identity to authenticate with the event hub. For the managed identity option, you can either create a system-assigned managed identity to the Stream Analytics job or a user-assigned managed identity to authenticate with the event hub. When you use a managed identity, the managed identity must be a member of [Azure Event Hubs Data Receiver or Azure Event Hubs Data Owner roles](../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs). |
 | **Event Hub policy name** | The shared access policy that provides access to the Event Hubs. Each shared access policy has a name, permissions that you set, and access keys. This option is automatically populated, unless you select the option to provide the Event Hubs settings manually.|
 | **Partition key** | It's an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It's used for improving the performance of your query if it includes a `PARTITION BY` or `GROUP BY` clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to `PartitionId.` |
 | **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn’t include leading 0 for decimal numbers. |
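
Editorial aside (not part of this commit): as a quick way to exercise an event hub input configured with the properties in the table above, the sketch below sends one JSON test event to an event hub. It assumes the `azure-eventhub` Python package; the environment variable names, event hub name, and payload fields are placeholders, not values from the article. `json.dumps` emits spec-compliant numbers with no leading zeros, in line with the serialization note in the last row.

```python
import json
import os

from azure.eventhub import EventData, EventHubProducerClient

# Placeholders: supply your own namespace connection string and event hub name.
CONN_STR = os.environ["EVENTHUB_CONN_STR"]
EVENTHUB_NAME = os.environ.get("EVENTHUB_NAME", "samples-input")

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONN_STR, eventhub_name=EVENTHUB_NAME
)

# json.dumps produces spec-compliant JSON numbers (e.g. 0.5, never 00.5),
# so the payload satisfies the serialization note in the table above.
payload = {"deviceId": "sensor-01", "temperature": 21.5}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(payload)))
    producer.send_batch(batch)
```

A connection-string producer is used here only for brevity; for the Stream Analytics job itself, the table above recommends the managed identity authentication mode.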
@@ -154,7 +154,7 @@ The following table explains each property in the **New input** page in the Azur
 | **Storage account** | The name of the storage account where the blob files are located. |
 | **Storage account key** | The secret key associated with the storage account. This option is automatically populated unless you select the option to provide the settings manually. |
 | **Container** | Containers provide a logical grouping for blobs. You can choose either **Use existing** container or **Create new** to have a new container created.|
-| **Authentication mode** | Specify the type of the authentication you want to use to connect to the storage account. You can use a connection string or a managed identity to authenticate with the storage account. For the managed identity option, you can either create a system-assigned managed identity to the Stream Analytics job or a user-assigned managed identity to authenticate with the storage account. When you use a managed identity, the managed identity must be a member of an appropriate role on the storage account. |
+| **Authentication mode** | Specify the type of the authentication you want to use to connect to the storage account. You can use a connection string or a managed identity to authenticate with the storage account. For the managed identity option, you can either create a system-assigned managed identity to the Stream Analytics job or a user-assigned managed identity to authenticate with the storage account. When you use a managed identity, the managed identity must be a member of an [appropriate role](/azure/role-based-access-control/built-in-roles#storage) on the storage account. |
 | **Path pattern** (optional) | The file path used to locate the blobs within the specified container. If you want to read blobs from the root of the container, don't set a path pattern. Within the path, you can specify one or more instances of the following three variables: `{date}`, `{time}`, or `{partition}`<br/><br/>Example 1: `cluster1/logs/{date}/{time}/{partition}`<br/><br/>Example 2: `cluster1/logs/{date}`<br/><br/>The `*` character isn't an allowed value for the path prefix. Only valid <a href="/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata">Azure blob characters</a> are allowed. Don't include container names or file names. |
 | **Date format** (optional) | If you use the date variable in the path, the date format in which the files are organized. Example: `YYYY/MM/DD` <br/><br/> When blob input has `{date}` or `{time}` in its path, the folders are looked at in ascending time order.|
 | **Time format** (optional) | If you use the time variable in the path, the time format in which the files are organized. Currently the only supported value is `HH` for hours. |
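
Editorial aside (not part of this commit): the standard-library Python sketch below shows how a path pattern such as `cluster1/logs/{date}/{time}/{partition}`, combined with the `YYYY/MM/DD` date format and the `HH` time format from the table above, resolves to concrete blob prefixes in ascending time order. The helper name `blob_prefixes`, the start time, and the partition count are assumptions for illustration, not Stream Analytics internals.

```python
from datetime import datetime, timedelta, timezone

# Settings mirroring the table above (illustrative values, not a real SDK call).
PATH_PATTERN = "cluster1/logs/{date}/{time}/{partition}"
DATE_FORMAT = "%Y/%m/%d"   # corresponds to the portal's YYYY/MM/DD
TIME_FORMAT = "%H"         # HH is currently the only supported time format

def blob_prefixes(start, hours, partitions):
    """Yield the blob prefixes covered by the pattern, in ascending time order."""
    for h in range(hours):
        ts = start + timedelta(hours=h)
        for p in range(partitions):
            yield (PATH_PATTERN
                   .replace("{date}", ts.strftime(DATE_FORMAT))
                   .replace("{time}", ts.strftime(TIME_FORMAT))
                   .replace("{partition}", str(p)))

start = datetime(2024, 1, 1, tzinfo=timezone.utc)  # example start time
for prefix in blob_prefixes(start, hours=2, partitions=2):
    print(prefix)
# cluster1/logs/2024/01/01/00/0
# cluster1/logs/2024/01/01/00/1
# cluster1/logs/2024/01/01/01/0
# cluster1/logs/2024/01/01/01/1
```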
