## Deployment

### Step 1. CloudFormation

1. Create a deployment bucket where you'll put the assets (the Lambda function and layer zip files) needed for the CloudFormation deployment
2. Create an export bucket where inventory exports will go when requested by users. Users will need access to this bucket to pick up their exports
3. Go to Secrets Manager in the console
    - Click on Store a new secret
    - Under Secret type, select Other type of secret
    - Below, for the key/value pair, put in `username` for the key and any username value you’d like (e.g. admin)
    - Click Add row and put in another key/value pair with `password` as the key and whatever password you’d like for the value (must be at least 12 characters long with at least one capital, number, and symbol)
    - Click Next
    - For the secret name, enter `s3auditor-opensearch-info` (this is required)
    - Click Next, click Next again, and then click Store (a CLI alternative is shown after this list)
4. Run the CloudFormation template provided here to set up all the resources needed for this solution. This step can take approximately 15 minutes or more.
5. Upon success, save the Role ARN, API Gateway endpoint, and OpenSearch Dashboard URL outputs
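
If you prefer the CLI, the secret from step 3 can instead be created with a single command; a minimal sketch, assuming an `admin` username (the password value is a placeholder and must meet the complexity rules above):

```
aws secretsmanager create-secret \
  --name s3auditor-opensearch-info \
  --secret-string '{"username":"admin","password":"YourStr0ngPassw0rd!"}'
```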

### Step 2. OpenSearch Setup

1. Go to the OpenSearch Dashboard URL and log in with the credentials you saved in Secrets Manager
2. In the top left, click on the menu icon and go to Security
3. Select Roles and find the `all_access` and `security_manager` roles
    - In both of these roles, go to the Mapped Users tab
    - Click on Manage Mapping
    - Add the ARN for the Role from the CloudFormation Outputs tab into the Backend roles and click Map (see the equivalent API call after this list)
4. In the AWS Console, navigate to Lambda Functions and click on `s3auditor-create-os-tables-run-once`
    - Go to `lambda_function.py` so you can see the code of the function
    - Press Test above and create a new test event. The name can be anything you choose
    - Press Test again to execute the function to create the necessary OpenSearch indexes
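
The role mapping from step 3 can also be scripted against the OpenSearch security REST API; a minimal sketch, assuming a recent OpenSearch domain where the security API lives under `_plugins/_security` (the endpoint, credentials, and role ARN are placeholders):

```
curl -u 'admin:YourStr0ngPassw0rd!' -X PUT \
  "https://your-opensearch-domain/_plugins/_security/api/rolesmapping/all_access" \
  -H "Content-Type: application/json" \
  -d '{"backend_roles": ["arn:aws:iam::111122223333:role/YourRoleFromCloudFormation"]}'
```

Repeat the same call for the `security_manager` role.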

### Step 3. Front-end Setup

This step is only required if you'd like to use the provided React front-end to view the information stored in OpenSearch.

1. Unzip all the React code to a directory
2. Update `REACT_APP_API_GATEWAY_URL` with the API Gateway output from CloudFormation
    - You will need to copy `source/frontend/.env.template` to `source/frontend/.env` (an example is shown after this list)
3. Get the latest modules by running `npm install`
4. Build the application via `npm run build`
5. Download the React build code from the `build` directory and set it up as a static site in an S3 bucket or any hosting location of your choice
    - If you want to run it locally on your machine, run `npm start`
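
After copying the template, `source/frontend/.env` ends up containing a single line; a sketch with a placeholder endpoint (use the API Gateway URL from your CloudFormation outputs):

```
REACT_APP_API_GATEWAY_URL=https://abc123de45.execute-api.us-east-1.amazonaws.com/prod
```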

## Adding a Bucket for S3 Auditor to Track

If this is the FIRST bucket in this AWS account to get set up for the S3 Auditor:

1. Go to Amazon EventBridge and go to Rules (make sure you're in the same region as your bucket)
2. Create a new Rule:
    - Give it any name and description
    - Keep the default event bus unless you’d like to set up a more custom workflow
    - For Rule Type, make sure “Rule with an event pattern” is selected
    - Click Next
    - For Event Source, leave it on “AWS events or EventBridge partner events”
    - Under Event Pattern > AWS Service, select Simple Storage Service (S3)
    - For Event Type, select Amazon S3 Event Notification
    - Select the following events for S3 Event Notifications (the resulting pattern is sketched after this list):
        - Object Access Tier Changed
        - Object Created
        - Object Deleted
        - Object Restore Completed
        - Object Storage Class Changed
        - Object Tags Added
        - Object Tags Deleted
    - You can leave Any bucket selected or specify your bucket name
    - Click Next
    - For the Target, select EventBridge event bus
    - Select “Event bus in a different account or Region”
    - Add the ARN of the Event bus from your auditor account
    - You can either create a new role or use an existing one if you have one already
    - Click Next
    - Add any tags you’d like
    - Click Next and review the setup
    - Click Finish
3. In the S3 Auditor account, go to the `default` EventBridge bus and make sure your account is added as a PutEvents source
    - Go to the EventBridge event bus you specified
    - Edit the PutEvents permissions to allow your sender account to send events
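
That bus resource policy typically contains a statement like the following; a minimal sketch with placeholder values (111122223333 is the sender account, 444455556666 the auditor account, and the Region is assumed):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSenderAccountPutEvents",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-east-1:444455556666:event-bus/default"
    }
  ]
}
```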
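
For reference, the event selections from step 2 boil down to an event pattern along these lines; a sketch, to which a `detail.bucket.name` filter can be added if you scoped the rule to a single bucket:

```
{
  "source": ["aws.s3"],
  "detail-type": [
    "Object Access Tier Changed",
    "Object Created",
    "Object Deleted",
    "Object Restore Completed",
    "Object Storage Class Changed",
    "Object Tags Added",
    "Object Tags Deleted"
  ]
}
```
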
2. Go to the bucket you'd like to add and click on the Properties tab
3. Under Event Notifications, click Edit next to Amazon EventBridge and set it to On
4. In the AWS Account where your S3 Auditor is set up:
    - Go to IAM > `s3auditorLambdaRole` and add a statement like the one below to an existing inline policy, or create a new one:
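
A minimal sketch of that statement, assuming the auditor's Lambda role needs read access to the tracked bucket's objects and tags (the bucket name is a placeholder; the exact actions your deployment needs may differ):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-tracked-bucket",
        "arn:aws:s3:::your-tracked-bucket/*"
      ]
    }
  ]
}
```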

This optional step should only be done if you would like to set up GET request logging:

1. Create a bucket where the server access logs from the bucket above will go
2. The bucket name is required to start with `s3auditorlog-`
    - For example: `s3auditorlog-server-access-logs-for-my-bucket`
3. In the Permissions tab of this bucket, add the following statement to your bucket policy:
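
A sketch of the grant that lets the S3 logging service deliver logs (the bucket name is a placeholder; you may also want `aws:SourceAccount`/`aws:SourceArn` conditions):

```
{
  "Effect": "Allow",
  "Principal": {
    "Service": "logging.s3.amazonaws.com"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::s3auditorlog-server-access-logs-for-my-bucket/*"
}
```
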
4. Go to the bucket you'd like to add and click on the Properties tab
5. Under Event Notifications, click Edit next to Amazon EventBridge and set it to On
6. In the AWS Account where your S3 Auditor is set up:
    - Go to IAM > `s3auditorLambdaRole` and add a statement like the one below to an existing inline policy, or create a new one:
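
A minimal sketch, assuming the role only needs to read the delivered log objects (the bucket name is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::s3auditorlog-server-access-logs-for-my-bucket/*"
    }
  ]
}
```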

7. Go to the Properties tab of the bucket you'd like to log
8. Under Server access logging, click Edit and select Enable
9. Click “Browse S3”, select the logging bucket you just created above, and click “Choose Path”
10. Then click “Save Changes”

## Importing Data from an S3 Inventory File

If you have an existing bucket you would like to add to the S3 Auditor, you can run the process below to import existing metadata from an S3 inventory file.

1. Go through the “Adding a Bucket for S3 Auditor to Track” steps above to add your bucket to the S3 Auditor
2. Generate an S3 inventory file for the bucket
3. On an EC2 instance or a local machine that has access to the AWS account where the S3 Auditor is running and has the necessary permissions to access SQS, run the following:
    - Run `s3_inventory_file_import.py region account inventory_filename queue_name log_filename` (a usage example is shown after this list)
        - `region` - your region (e.g. us-east-1)
        - `account` - your AWS account number
        - `inventory_filename` - the name of the S3 inventory file you downloaded from S3
        - `queue_name` - the SQS queue name created by CloudFormation (e.g. s3auditor-object-activity)
        - `log_filename` - log filename for logging the activity from the import process
    - This will run until all the objects from the inventory file are added to the queue to be processed by the S3 Auditor
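
For example, a run with placeholder argument values (assuming the script is invoked with Python 3) might look like:

```
python3 s3_inventory_file_import.py us-east-1 123456789012 inventory.csv s3auditor-object-activity import.log
```
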
## Running the Guidance

Once the CloudFormation deployment is complete and you've gone through the bucket and account setup above, put an object into a bucket that's part of the setup. Shortly afterwards, you should see that object's metadata in the React front-end, if you set it up, or as a record in the OpenSearch cluster that's part of this solution.

## Cleanup

When you delete the CloudFormation stack for this solution, all of the components it set up will be removed. Any buckets created and being audited by the system will remain untouched, but their event notifications will stop flowing because the S3 Auditor EventBridge bus will have been removed, so they would have to be repointed if you redeploy. You will also have to manually delete the bucket in your account where the Lambda function zip files are stored, which you referenced during the CloudFormation setup.

## Notices

Customers are responsible for making their own independent assessment of the information in this Guidance. This Guidance: (a) is for informational purposes only, (b) represents AWS current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. AWS responsibilities and liabilities to its customers are controlled by AWS agreements, and this Guidance is not part of, nor does it modify, any agreement between AWS and its customers.