
Commit f0ba303

Added some new audit fields to accept.
1 parent dbefa37 commit f0ba303

File tree

2 files changed: +42, -22 lines

Monitoring/ingest_nas_audit_logs_into_cloudwatch/README.md

Lines changed: 29 additions & 11 deletions
@@ -26,20 +26,28 @@ systems that you want to ingest the audit logs from.
     }
 ```
 - You have applied the necessary SACLs to the files you want to audit. The knowledge base article linked above provides guidance on how to do this.
+- Since the Lambda function runs within your VPC, it will not have access to the Internet, even if you can access the Internet from the subnet it runs in.
+  Therefore, there needs to be a VPC endpoint for every AWS service that the Lambda function uses. Specifically, the Lambda function needs to be able to access the following services:
+    - FSx
+    - Secrets Manager
+    - CloudWatch Logs
+    - S3 - Note that there is typically a Gateway type VPC endpoint for S3, so you should not need to create one.
+    - EC2
 - You have created a role with the necessary permissions to allow the Lambda function to do the following:
 
 <table>
 <tr><th>Service</th><th>Actions</th><th>Resources</th></tr>
-<tr><td>fsx</td><td>fsx:DescribeFileSystems</td><td>*</td></tr>
-<tr><td rowspan="3">ec2</td><td>DescribeNetworkInterfaces</td><td>*</td></tr>
-<tr><td>CreateNetworkInterface</td><td>arn:aws:ec2:*:&lt;accountID&gt;:*</td></tr>
-<tr><td>DeleteNetworkInterface</td><td>arn:aws:ec2:*:&lt;accountID&gt;:*</td></tr>
-<tr><td rowspan="2">logs</td><td>CreateLogStream</td><td>arn:aws:logs:&lt;region&gt;:&lt;accountID&gt;:log-group:&lt;logGroupName&gt;:*</td></tr>
-<tr><td>PutLogEvents</td><td>arn:aws:logs:&lt;region&gt;:&lt;accountID&gt;:log-group:&lt;logGroupName&gt;:*</td></tr>
-<tr><td rowspan="3">s3</td><td>ListBucket</td><td>arn:aws:s3:&lt;region&gt;:&lt;accountID&gt;:*</td></tr>
-<tr><td>GetObject</td><td>arn:aws:s3:&lt;region&gt;:&lt;accountID&gt;:*/*</td></tr>
-<tr><td>PutObject</td><td>arn:aws:s3:&lt;region&gt;:&lt;accountID&gt;:*/*</td></tr>
-<tr><td>secretsmanager</td><td>GetSecretValue</td><td>arn:aws:secretsmanager:&lt;region&gt;:&lt;accountID&gt;:secret:&lt;secretName&gt;</td></tr>
+<tr><td>FSx</td><td>fsx:DescribeFileSystems</td><td>\*</td></tr>
+<tr><td rowspan="3">ec2</td><td>DescribeNetworkInterfaces</td><td>\*</td></tr>
+<tr><td>CreateNetworkInterface</td><td>arn:aws:ec2:&lt;region&gt;:&lt;accountID&gt;:\*</td></tr>
+<tr><td>DeleteNetworkInterface</td><td>arn:aws:ec2:&lt;region&gt;:&lt;accountID&gt;:\*</td></tr>
+<tr><td rowspan="3">CloudWatch Logs</td><td>CreateLogGroup</td><td rowspan="3">arn:aws:logs:&lt;region&gt;:&lt;accountID&gt;:log-group:\*</td></tr>
+<tr><td>CreateLogStream</td></tr>
+<tr><td>PutLogEvents</td></tr>
+<tr><td rowspan="3">s3</td><td>ListBucket</td><td>arn:aws:s3:::\*</td></tr>
+<tr><td>GetObject</td><td rowspan="2">arn:aws:s3:::\*/\*</td></tr>
+<tr><td>PutObject</td></tr>
+<tr><td>Secrets Manager</td><td>GetSecretValue</td><td>arn:aws:secretsmanager:&lt;region&gt;:&lt;accountID&gt;:secret:&lt;secretName&gt;\*</td></tr>
 </table>
 Where:
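The permissions table above corresponds roughly to an IAM policy like the following sketch. The statement grouping is illustrative, not taken from the repository, and the placeholders (region, account ID, secret name) must be substituted:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["fsx:DescribeFileSystems", "ec2:DescribeNetworkInterfaces"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface"],
            "Resource": "arn:aws:ec2:<region>:<accountID>:*"
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:<region>:<accountID>:log-group:*"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::*", "arn:aws:s3:::*/*"]
        },
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:<region>:<accountID>:secret:<secretName>*"
        }
    ]
}
```

Note that S3 bucket ARNs do not include a region or account ID, and the trailing `*` on the secret ARN covers the random suffix Secrets Manager appends.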

@@ -48,6 +56,11 @@ Where:
 - &lt;logGroupName&gt; - is the name of the CloudWatch log group where the audit logs will be ingested.
 - &lt;secretName&gt; - is the name of the secret that contains the credentials for the fsxadmin accounts.
 
+Notes:
+- Since the Lambda function runs within your VPC, it needs to be able to create and delete network interfaces.
+- It needs to be able to create log groups so it can create a log group for its own diagnostic output.
+- Since the ARN of any Secrets Manager secret ends with random characters, you must add the `*` at the end.
+
 ## Deployment
 1. Create a Lambda deployment package by:
     1. Downloading the `ingest_fsx_audit_logs.py` file from this repository and placing it in an empty directory.
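The packaging step above can be scripted. A minimal sketch using Python's `zipfile` module, assuming the handler file sits in the current directory and has no bundled third-party dependencies (the file name below is a stand-in):

```python
import zipfile

def make_deployment_package(source_file, zip_name="lambda_package.zip"):
    # Zip the Lambda handler into a deployment package that can be
    # uploaded via the Lambda console's "Upload from .zip file" option.
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(source_file)
    return zip_name

# Example (hypothetical file name):
# make_deployment_package("handler.py")
```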
@@ -68,14 +81,19 @@ Where:
 
 | Variable | Description |
 | --- | --- |
+| fsxRegion | The region where the FSx for ONTAP file systems are located. |
 | secretArn | The ARN of the secret that contains the credentials for all the FSx for ONTAP file systems you want to gather audit logs from. |
 | secretRegion | The region where the secret is stored. |
 | s3BucketRegion | The region of the S3 bucket where the stats file is stored. |
 | s3BucketName | The name of the S3 bucket where the stats file is stored. |
 | statsName | The name you want to use for the stats file. |
 | logGroupName | The name of the CloudWatch log group to ingest the audit logs into. |
+| volumeName | The name of the volume, on all the FSx for ONTAP file systems, where the audit logs are stored. |
+
+4. Test the Lambda function by clicking on the `Test` tab and then clicking on the `Test` button. You should see "Executing function: succeeded".
+   If not, click on the "Details" button to see what errors there are.
 
-4. After you have tested that the Lambda function is running correctly, add an EventBridge trigger to have it run periodically.
+5. After you have tested that the Lambda function is running correctly, add an EventBridge trigger to have it run periodically.
    You can do this by clicking on the `Add Trigger` button within the AWS console and selecting `EventBridge (CloudWatch Events)`
    from the dropdown. You can then configure the schedule to run as often as you want. How often depends on how often you have
    set up your FSx for ONTAP file systems to generate audit logs, and how up-to-date you want the CloudWatch logs to be.

Monitoring/ingest_nas_audit_logs_into_cloudwatch/ingest_audit_log.py

File mode changed from 100755 to 100644
Lines changed: 13 additions & 11 deletions
@@ -88,7 +88,7 @@ def processFile(ontapAdminServer, headers, volumeUUID, filePath):
     #
     # Number of bytes to read for each API call.
     blockSize=1024*1024
-
+
     bytesRead = 0
     requestSize = 1 # Set to > 0 to start the loop.
     while requestSize > 0:
@@ -116,7 +116,7 @@ def processFile(ontapAdminServer, headers, volumeUUID, filePath):
         else:
             print(f'API call to {endpoint} failed. HTTP status code: {response.status}.')
             break
-
+
     f.close()
     #
     # Upload the audit events to CloudWatch.
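The read loop in `processFile` requests fixed-size blocks until a request comes back empty. A simplified, self-contained sketch of that pattern (not the repository's code; the real version calls the ONTAP file-read API):

```python
def read_in_blocks(read_fn, block_size=1024*1024):
    # read_fn(offset, length) returns up to `length` bytes starting at
    # `offset`, and b"" once the end of the file is reached.
    bytes_read = 0
    while True:
        chunk = read_fn(bytes_read, block_size)
        if not chunk:
            break
        bytes_read += len(chunk)
        yield chunk

# In-memory stand-in for the remote file:
data = b"x" * 2500
chunks = list(read_in_blocks(lambda off, n: data[off:off + n], block_size=1000))
```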
@@ -135,8 +135,10 @@ def createCWEvent(event):
     # Attributes: A verbose list of strings representing the attributes.
     # DirHandleID: A string of numbers that I'm not sure what they represent.
     # SearchFilter: Always seems to be null.
-    # SearchPattern: Always seems to be set to "Not Present"
-    ignoredDataFields = ["ObjectServer", "HandleID", "InformationRequested", "AccessList", "AccessMask", "DesiredAccess", "Attributes", "DirHandleID", "SearchFilter", "SearchPattern"]
+    # SearchPattern: Always seems to be set to "Not Present".
+    # SubjectPort: Just the TCP port that the user came in on.
+    # OldDirHandle and NewDirHandle: The UUIDs of the directory. The OldPath and NewPath are human readable.
+    ignoredDataFields = ["ObjectServer", "HandleID", "InformationRequested", "AccessList", "AccessMask", "DesiredAccess", "Attributes", "DirHandleID", "SearchFilter", "SearchPattern", "SubjectPort", "OldDirHandle", "NewDirHandle"]
     #
     # Convert the timestamp from the XML file to a timestamp in milliseconds.
     # An example format of the time is: 2024-09-22T21:05:27.263864000Z
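The comment above describes converting the audit timestamp to epoch milliseconds for CloudWatch. One way to sketch that conversion (not necessarily the author's exact code; note that `strptime`'s `%f` only accepts six fractional digits, so the nanoseconds must be trimmed):

```python
from datetime import datetime, timezone

def ontap_ts_to_epoch_ms(ts):
    # Example input: 2024-09-22T21:05:27.263864000Z
    # Trim the 9-digit nanosecond field to microseconds for %f.
    head, frac = ts.rstrip("Z").split(".")
    dt = datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")
    dt = dt.replace(tzinfo=timezone.utc)
    # Whole seconds plus whole milliseconds, avoiding float rounding.
    return int(dt.timestamp()) * 1000 + dt.microsecond // 1000
```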
@@ -180,7 +182,7 @@ def createCWEvent(event):
             str += ", InformationSet=Null"
         else:
             str += f", InformationSet={data['#text']}"
-    elif data['@Name'] in ['ObjectType', 'WriteOffset', 'WriteCount', 'NewSD', 'OldSD']: # These don't require special handling.
+    elif data['@Name'] in ['ObjectType', 'WriteOffset', 'WriteCount', 'NewSD', 'OldSD', 'SubjectUserIsLocal', 'OldPath', 'NewPath', 'OldRotateLimit', 'NewRotateLimit', 'OldLogFormat', 'NewLogFormat', 'OldRetentionDuration', 'NewRetentionDuration', 'AuditGuarantee', 'OldDestinationPath', 'NewDestinationPath']: # These don't require special handling.
         str += f", {data['@Name']}={data['#text']}"
     else:
         print(f"Unknown data type: {data['@Name']}")
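The `elif` chain above appends each event data field as `Name=value` while skipping fields that carry no useful information. A simplified sketch of that flattening (the function name is hypothetical; the dict shape mirrors `xmltodict` output):

```python
def flatten_event_data(data_items, ignored_fields):
    # data_items: parsed <Data> elements, each shaped like
    # {'@Name': 'ObjectType', '#text': 'File'} (as xmltodict produces).
    s = ""
    for data in data_items:
        name = data["@Name"]
        if name in ignored_fields:
            continue  # Field carries no useful information.
        s += f", {name}={data.get('#text')}"
    return s
```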
@@ -201,7 +203,7 @@ def ingestAuditFile(auditLogPath, auditLogName):
 
     if dict.get('Events') == None or dict['Events'].get('Event') == None:
         print(f"No events found in {auditLogName}")
-        return
+        return
     #
     # Ensure the logstream exists.
     try:
@@ -243,7 +245,8 @@ def checkConfig():
         'secretArn': secretArn if 'secretArn' in globals() else None,
         's3BucketRegion': s3BucketRegion if 's3BucketRegion' in globals() else None,
         's3BucketName': s3BucketName if 's3BucketName' in globals() else None,
-        'statsName': statsName if 'statsName' in globals() else None
+        'statsName': statsName if 'statsName' in globals() else None,
+        'vserverName': vserverName if 'vserverName' in globals() else None
     }
 
     for item in config:
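The `checkConfig()` pattern above builds a dict where each setting defaults to `None` when unset, so missing settings can be reported together. A generic sketch of the same idea using environment variables, which is how Lambda usually supplies settings (helper name and variable names are illustrative):

```python
import os

def load_config(names):
    # Read each expected Lambda environment variable; collect the
    # names of any that are missing so they can be reported together.
    config = {name: os.environ.get(name) for name in names}
    missing = [name for name, value in config.items() if value is None]
    return config, missing

# Example:
# config, missing = load_config(["secretArn", "volumeName", "vserverName"])
# if missing: raise RuntimeError(f"Missing settings: {missing}")
```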
@@ -315,7 +318,7 @@ def lambda_handler(event, context):
     lastFileReadChanged = False
     #
     # Process each FSxN.
-    for fsxn in fsxNs:
+    for fsxn in fsxNs:
         fsId = fsxn.split('.')[1]
         #
         # Get the password
@@ -331,7 +334,7 @@ def lambda_handler(event, context):
         #
         # Get the volume UUID for the audit_logs volume.
         volumeUUID = None
-        endpoint = f"https://{fsxn}/api/storage/volumes?name={config['volumeName']}"
+        endpoint = f"https://{fsxn}/api/storage/volumes?name={config['volumeName']}&svm={config['vserverName']}"
         response = http.request('GET', endpoint, headers=headersQuery, timeout=5.0)
         if response.status == 200:
             data = json.loads(response.data.decode('utf-8'))
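The added `svm=` query parameter scopes the volume lookup to a single vserver instead of matching the volume name across all vservers. A tiny helper (hypothetical name, hypothetical host in the example) showing the URL the code builds:

```python
def volume_query_url(fsxn, volume_name, vserver_name):
    # ONTAP REST: filter /api/storage/volumes by volume name and SVM.
    return (f"https://{fsxn}/api/storage/volumes"
            f"?name={volume_name}&svm={vserver_name}")
```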
@@ -343,8 +346,7 @@ def lambda_handler(event, context):
             continue
         #
         # Get all the files in the volume that match the audit file pattern.
-        # Since the vserver is part of the filename, it assumes the vserver is 'fsx'.
-        endpoint = f'https://{fsxn}/api/storage/volumes/{volumeUUID}/files?name=audit_fsx_D*&order_by=name%20asc&fields=name'
+        endpoint = f'https://{fsxn}/api/storage/volumes/{volumeUUID}/files?name=audit_{config["vserverName"]}_D*&order_by=name%20asc&fields=name'
         response = http.request('GET', endpoint, headers=headersQuery, timeout=5.0)
         data = json.loads(response.data.decode('utf-8'))
         for file in data['records']:
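The change above derives the audit-file glob from the configured vserver name instead of hard-coding `fsx`, since ONTAP embeds the vserver name in its NAS audit log file names. As a one-line sketch:

```python
def audit_file_pattern(vserver_name):
    # ONTAP audit log names start with audit_<vserver>_D..., so the
    # glob must be built from the configured vserver name.
    return f"audit_{vserver_name}_D*"
```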
