Commit ebac33b

Merge pull request #386 from VirtualMetric:DT-566-iam-roles-documentation

DT-566-Add IAM Permissions sections to 24 target and device docs

2 parents 915ad82 + 4d26502

27 files changed: +652 −14 lines

.github/workflows/pr.yml

Lines changed: 6 additions & 8 deletions

```diff
@@ -29,15 +29,13 @@ jobs:
       - name: Build documentation
         run: npm run build

-      - name: Install Wrangler
-        run: npm install -g wrangler@4
-
       - name: Deploy Preview to Cloudflare Pages
         id: deploy
-        run: wrangler pages deploy build --project-name=virtualmetric-docs --branch=${{ github.head_ref }} --commit-dirty=true
-        env:
-          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
-          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+        uses: cloudflare/wrangler-action@v3
+        with:
+          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
+          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+          command: pages deploy build --project-name=virtualmetric-docs --branch=${{ github.head_ref }} --commit-dirty=true

       - name: Remove old deployment comments
         uses: izhangzhihao/delete-comment@master
@@ -50,4 +48,4 @@ jobs:
         uses: thollander/actions-comment-pull-request@v2
         with:
           message: |
-            📄 **Docs Preview:** ${{ steps.deploy.outputs.deployment-url }}
+            📄 **Docs Preview:** ${{ steps.deploy.outputs.pages-deployment-alias-url }}
```

docs/configuration/devices/amazon-s3.mdx

Lines changed: 55 additions & 0 deletions

## Details

### IAM Permissions

When using IAM role-based authentication, the following permissions are required:

|IAM Action|Purpose|
|---|---|
|`s3:GetObject`|Download S3 objects for processing|
|`s3:ListBucket`|Validate bucket access at startup (when bucket name is configured)|
|`s3:ListAllMyBuckets`|Validate S3 access at startup (when bucket name is not configured)|
|`sqs:ReceiveMessage`|Poll SQS queue for S3 event notifications|
|`sqs:DeleteMessage`|Remove processed messages from the queue|
|`sqs:GetQueueAttributes`|Validate SQS queue connectivity at startup|

When using cross-account role assumption (`role_arn`), the calling identity also requires:

|IAM Action|Purpose|
|---|---|
|`sts:AssumeRole`|Assume IAM role in the target account|

**Minimum IAM Policy**:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3ReadObjects",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Sid": "S3ValidateBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Sid": "SQSConsumeMessages",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
    }
  ]
}
```

:::note Cross-Account Access
When accessing S3 buckets in another AWS account, configure `role_arn` and optionally use temporary credentials. The assumed role must have the S3 and SQS permissions above. The target role's trust policy must allow assumption from the source account, with an optional `ExternalId` condition for Security Lake scenarios.
:::

The Amazon S3 device operates as an event-driven pull-type data source that processes S3 objects based on SQS notifications. The device continuously polls an SQS queue for S3 event messages, downloads the referenced objects, and processes their contents through the telemetry pipeline.

**Event Processing Flow**: The device receives S3 event notifications from SQS containing bucket name and object key information. For each ObjectCreated event (Put, Post, Copy, CompleteMultipartUpload), the device downloads the S3 object and processes it according to its file type. After successful processing, the SQS message is deleted to prevent reprocessing.
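The event filtering step above can be sketched as follows. This is an illustrative stand-alone example, not the device's actual implementation: it extracts `(bucket, key)` pairs from the standard S3 event notification JSON carried in an SQS message body, keeping only `ObjectCreated:*` events.

```python
import json

# Per the S3 notification format, eventName values such as
# "ObjectCreated:Put" or "ObjectCreated:CompleteMultipartUpload"
# all share this prefix.
CREATED_PREFIX = "ObjectCreated:"

def objects_to_process(sqs_message_body: str):
    """Return (bucket, key) tuples for ObjectCreated events in one SQS message."""
    notification = json.loads(sqs_message_body)
    results = []
    for record in notification.get("Records", []):
        if not record.get("eventName", "").startswith(CREATED_PREFIX):
            continue  # skip e.g. ObjectRemoved events
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results

# Simulated SQS message body with one create and one delete event.
body = json.dumps({"Records": [
    {"eventName": "ObjectCreated:Put",
     "s3": {"bucket": {"name": "my-logs"}, "object": {"key": "2024/01/app.json"}}},
    {"eventName": "ObjectRemoved:Delete",
     "s3": {"bucket": {"name": "my-logs"}, "object": {"key": "old.json"}}},
]})
print(objects_to_process(body))  # [('my-logs', '2024/01/app.json')]
```

Only the create event survives the filter; in the real flow the device would then download that object and delete the SQS message.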

docs/configuration/devices/amazon-security-lake.mdx

Lines changed: 68 additions & 0 deletions

## Details

### IAM Permissions

When using IAM credentials or role assumption, the following permissions are required:

**SQS Permissions**:

|IAM Action|Purpose|
|---|---|
|`sqs:ReceiveMessage`|Poll queue for S3 object notifications|
|`sqs:DeleteMessage`|Remove processed messages from the queue|
|`sqs:GetQueueAttributes`|Validate SQS connectivity at startup|

**S3 Permissions**:

|IAM Action|Purpose|
|---|---|
|`s3:GetObject`|Download Parquet files from the Security Lake bucket|
|`s3:ListBucket`|Validate access to a specific bucket at startup|
|`s3:ListAllMyBuckets`|Validate S3 access at startup (when no specific bucket is configured)|

**STS Permissions (conditional)**:

|IAM Action|Purpose|
|---|---|
|`sts:AssumeRole`|Assume a cross-account IAM role (when `role_arn` is configured)|

**Minimum IAM Policy**:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SQSAccess",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
    },
    {
      "Sid": "S3ReadAccess",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::SECURITY_LAKE_BUCKET/*"
    },
    {
      "Sid": "S3ValidateBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::SECURITY_LAKE_BUCKET"
    },
    {
      "Sid": "S3ListBuckets",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```

:::note Cross-Account Role Assumption
Security Lake typically requires cross-account access via `role_arn` with `external_id`. The calling identity needs `sts:AssumeRole` on the target role. The target role's trust policy must allow assumption from the source account with the configured `ExternalId` condition. The assumed role must have the S3 and SQS permissions above attached to it.
:::

The Amazon Security Lake device implements a pull-type consumer pattern that integrates with Amazon Security Lake's S3-backed architecture. Security Lake stores normalized security data in OCSF format as Parquet files, and publishes S3 ObjectCreated events to an SQS queue. The device polls this queue, downloads the referenced Parquet files, and ingests OCSF events into DataStream.

**OCSF Schema Validation**: When enabled, the device validates each Parquet record against OCSF schema requirements. Invalid records generate warnings but do not halt file processing. Disable validation for performance-critical scenarios or when processing pre-validated data.
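The warn-but-continue validation behavior described above can be illustrated with a minimal sketch. This is not the shipped validator; the field list is an assumption based on OCSF's core required attributes (`class_uid`, `category_uid`, `time`, `metadata`):

```python
# Hypothetical required-field check; real OCSF validation is far richer
# (type checks, enum values, per-class constraints).
REQUIRED_OCSF_FIELDS = ("class_uid", "category_uid", "time", "metadata")

def validate_ocsf_record(record: dict) -> list:
    """Return a list of warnings; an empty list means the record passed."""
    return [f"missing required OCSF field: {name}"
            for name in REQUIRED_OCSF_FIELDS if name not in record]

ok = {"class_uid": 4001, "category_uid": 4, "time": 1700000000000,
      "metadata": {"version": "1.1.0"}}
bad = {"class_uid": 4001, "time": 1700000000000}

print(validate_ocsf_record(ok))   # []
print(validate_ocsf_record(bad))  # two warnings: category_uid, metadata
```

The key design point mirrored here is that validation returns warnings rather than raising, so one malformed record never aborts processing of the rest of the Parquet file.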

docs/configuration/devices/azure-blob-storage.mdx

Lines changed: 17 additions & 0 deletions

## Details

### IAM Permissions

When using service principal authentication, the following Azure RBAC roles are required:

|Azure Role|Scope|Purpose|
|---|---|---|
|`Storage Blob Data Reader`|Storage Account or Container|Read blobs and blob properties|
|`Storage Queue Data Message Processor`|Storage Account or Queue|Dequeue and delete queue messages|

:::note Connection String Authentication
When using connection string authentication, Azure RBAC roles are not applicable. The shared key embedded in the connection string provides full storage account access.
:::

:::note Startup Validation
The device validates connectivity at startup by reading blob service properties and queue metadata. The recommended roles above may not fully cover these validation calls. If startup validation fails, either use a custom role with the exact data actions or assign `Storage Queue Data Contributor` instead of `Storage Queue Data Message Processor` for broader queue access.
:::

The Azure Blob Storage device operates as a pull-type data source that periodically scans Azure storage containers for new files. The device supports multiple file formats and provides flexible authentication options for enterprise environments.

**File Format Processing**: The device automatically detects and processes files based on the configured format. JSON files are parsed as individual objects, JSONL files process each line as a separate record, and Parquet files are read using columnar processing for efficient large-data handling.
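The JSON-versus-JSONL distinction above can be made concrete with a small stand-alone sketch (illustrative only, not the device's parser): a JSON blob is parsed as one document, while a JSONL blob yields one record per non-empty line.

```python
import json

def parse_json_blob(data: str) -> list:
    """Parse a whole JSON document; wrap a single object in a list."""
    parsed = json.loads(data)
    return parsed if isinstance(parsed, list) else [parsed]

def parse_jsonl_blob(data: str) -> list:
    """Parse one JSON record per non-empty line."""
    return [json.loads(line) for line in data.splitlines() if line.strip()]

# The same two events, in each representation:
json_blob = '[{"event": "login"}, {"event": "logout"}]'
jsonl_blob = '{"event": "login"}\n{"event": "logout"}\n'

assert parse_json_blob(json_blob) == parse_jsonl_blob(jsonl_blob)
```

JSONL is the more robust choice for large blobs because a parser can stream line by line instead of loading the whole document into memory.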

docs/configuration/devices/event-hubs.mdx

Lines changed: 17 additions & 0 deletions

|`reuse`|N|`true`|Enable multi-worker mode|
|`workers`|N|`4`|Number of worker processes when reuse enabled|

## Details

### IAM Permissions

When using service principal authentication, the following Azure RBAC roles are required:

|Azure Role|Scope|Purpose|
|---|---|---|
|`Azure Event Hubs Data Receiver`|Event Hubs Namespace or Event Hub|Consume events and read hub properties|
|`Storage Blob Data Contributor`|Storage Account or Container|Read, write, and list checkpoint blobs|

The checkpoint storage requires `Contributor` (not just `Reader`) because the device writes checkpoint state and manages ownership blobs for partition load balancing.

:::note Connection String Authentication
When using connection string authentication, Azure RBAC roles are not needed for Event Hubs access. The Shared Access Policy embedded in the connection string governs access (typically `Listen` claim for consumers). Checkpoint storage still requires either a connection string or RBAC role assignment.
:::

## Key Features

### Multiple Workers

docs/configuration/devices/microsoft-sentinel.mdx

Lines changed: 16 additions & 0 deletions

The following fields are used to define the device:

|---|---|---|---|
|`batch_size`|N|`1000`|Number of incidents to fetch per batch|

## Details

### IAM Permissions

The service principal requires the following Azure RBAC role:

|Azure Role|Scope|Purpose|
|---|---|---|
|`Microsoft Sentinel Reader`|Log Analytics Workspace|Read incidents from Microsoft Sentinel|

The device is strictly read-only: it only lists incidents using `Microsoft.SecurityInsights/incidents/read`. No create, update, or delete operations are performed.

:::note Scope
Assign the role at the Log Analytics Workspace level rather than the subscription or resource group level for least-privilege access.
:::

## Key Features

### Incidents

docs/configuration/targets/_managed-identity.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@ Azure targets support **Managed Identity** authentication for credential-free ac
 - Azure Kubernetes Service (AKS)
 - Azure Functions

-**Required permissions**: The Managed Identity must be granted appropriate roles on the target Azure resource (e.g., `Storage Blob Data Contributor` for Blob Storage, `Azure Event Hubs Data Sender` for Event Hubs).
+**Required permissions**: The Managed Identity must be granted the appropriate Azure RBAC roles documented in each target's **IAM Permissions** section.

 :::note
 Managed Identity eliminates credential management overhead and is the recommended authentication method for Azure-hosted Director deployments.
```

Lines changed: 3 additions & 0 deletions

:::note
Per the AWS Service Authorization Reference, `s3:PutObject` covers `CreateMultipartUpload`, `UploadPart`, and `CompleteMultipartUpload`. Only `s3:AbortMultipartUpload` requires its own IAM action.
:::
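As an illustration of that note, a write-only S3 statement that supports multipart uploads needs just two actions. This is a minimal sketch consistent with the note above, not taken from the changed docs; `BUCKET_NAME` is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3WriteWithMultipart",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
```

`s3:AbortMultipartUpload` matters operationally: without it, failed multipart uploads cannot be cleaned up and their parts continue to accrue storage charges.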

docs/configuration/targets/aws/amazon-cloudwatch.mdx

Lines changed: 42 additions & 0 deletions

Amazon CloudWatch Logs supports a maximum of 10,000 log events per PutLogEvents request.

Supports static credentials (access key and secret key) with optional session tokens for temporary credentials. When deployed on AWS infrastructure, it can leverage IAM role-based authentication without explicit credentials.

All authentication methods call `sts:GetCallerIdentity` during initialization to validate credentials before proceeding.

### IAM Permissions

When using IAM role-based authentication, the following permissions are required:

|IAM Action|Purpose|
|---|---|
|`sts:GetCallerIdentity`|Validate credentials at initialization|
|`logs:CreateLogGroup`|Create log group if it does not exist|
|`logs:CreateLogStream`|Create log stream if it does not exist|
|`logs:PutLogEvents`|Send log events to the stream|

Minimum IAM policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "STSIdentity",
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    },
    {
      "Sid": "CloudWatchLogsWrite",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:REGION:ACCOUNT_ID:log-group:LOG_GROUP_NAME",
        "arn:aws:logs:REGION:ACCOUNT_ID:log-group:LOG_GROUP_NAME:log-stream:*"
      ]
    }
  ]
}
```

### Log Groups and Streams

CloudWatch Logs organizes log data into log groups and log streams:

docs/configuration/targets/aws/amazon-kinesis.mdx

Lines changed: 33 additions & 0 deletions

Supports static credentials (access key and secret key) with optional session tokens for temporary credentials. When deployed on AWS infrastructure, it can leverage IAM role-based authentication without explicit credentials.

All authentication methods call `sts:GetCallerIdentity` during initialization to validate credentials before proceeding.

### IAM Permissions

When using IAM role-based authentication, the following permissions are required:

|IAM Action|Purpose|
|---|---|
|`sts:GetCallerIdentity`|Validate credentials at initialization|
|`kinesis:PutRecords`|Send batch of records to stream|

Minimum IAM policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "STSIdentity",
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    },
    {
      "Sid": "KinesisWrite",
      "Effect": "Allow",
      "Action": "kinesis:PutRecords",
      "Resource": "arn:aws:kinesis:REGION:ACCOUNT_ID:stream/STREAM_NAME"
    }
  ]
}
```

### Stream and Shard Architecture

Kinesis Data Streams uses shards as the base throughput unit. Each shard provides:
