Commit 076dd99

added subsections

1 parent 3284b2e commit 076dd99

File tree

1 file changed

docs/integrations/databases/mongodb-atlas.md

Lines changed: 33 additions & 12 deletions
@@ -321,12 +321,20 @@ In this section, you deploy the SAM application, which creates the necessary res
 
 The lambda function should now be sending logs to Sumo. You can check the function's logs in CloudWatch under **Monitor** > **Logs**.
 
-13. Configuring collection for multiple projects (assuming you are already collecting Atlas data for one project). This task requires that you do the following:
+##### Configure collection for multiple projects
+
+If you are already collecting Atlas data for one project, perform the following steps to configure collection for additional projects:
 
 1. [Deploy the MongoDB Atlas SAM application](#deploy-the-sumo-logic-mongodb-atlas-sam-application) with the configuration for a new project.
-2. From the Lambda console, go to the **mongodbatlas.yaml** file and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second SAM app deployment, because these events are global and are already captured by first collector.
 
-14. By default the solution collects all log types & metrics for all the clusters, if you to filter based on cluster alias, do the following
+1. From the Lambda console, open the **mongodbatlas.yaml** file and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second SAM app deployment, because these events are global and are already captured by the first collector.
+
+1. After editing the file, choose **Deploy**. The next lambda invocation will use the new configuration file.
+
+##### Filtering log types and metrics
+
+By default, the solution collects all log types and metrics for all clusters. To filter by cluster alias and log type, do the following:
+
 1. After the deployment is complete, go to the Lambda console, open the **mongodbatlas.yaml** file, and uncomment the `Clusters` parameter under the `Collection` section, as shown in the following example. Add the names of the clusters for which you want to collect logs and metrics. Each cluster name should match the name you specified during [cluster creation](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/#specify-a-name-for-the-cluster-in-the-name-box).
 
 <img src={useBaseUrl('img/integrations/databases/mongodbatlas/changecluster.png')} alt="MongoDB Atlas filter by cluster" />
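As a rough sketch of the `Clusters` edit described in the step above (the `Collection` and `Clusters` names come from the text; the cluster names and exact layout are illustrative placeholders):

```yaml
# Illustrative fragment of mongodbatlas.yaml (a sketch, not the shipped file)
Collection:
  # Uncomment Clusters and list the cluster aliases to collect logs and metrics for.
  # Each name must match the name chosen during cluster creation in Atlas.
  Clusters:
    - ProductionCluster   # placeholder
    - StagingCluster      # placeholder
```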
@@ -335,6 +343,8 @@ The lambda function should be working now in sending logs to Sumo. You can check
 
 <img src={useBaseUrl('img/integrations/databases/mongodbatlas/updatemetricslogs.png')} alt="MongoDB Atlas filter by log and metric type" />
 
+1. After editing the file, choose **Deploy**. The next lambda invocation will use the new configuration file.
+
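The `EVENTS_ORG` edit from the multi-project steps above might look like the following sketch (only the `EVENTS_ORG` key name comes from the text; its position and value here are illustrative):

```yaml
# Illustrative fragment of mongodbatlas.yaml for the second deployment (a sketch).
# Organisation events are global and already captured by the first collector,
# so EVENTS_ORG is commented out to avoid duplicate collection:
# EVENTS_ORG: true   # value is a placeholder
```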
 #### Configure Script-Based Collection for MongoDB Atlas
 
 This section shows you how to configure script-based log collection for the Sumo Logic MongoDB Atlas app. The _sumologic-mongodb-atlas_ script is compatible with Python 3.11 and Python 2.7, and has been tested on Ubuntu 18.04 LTS.
@@ -387,23 +397,34 @@ This task makes the following assumptions:
 ```bash
 */5 * * * * /usr/bin/python3 -m sumomongodbatlascollector.main > /dev/null 2>&1
 ```
-5. Configuring collection for multiple projects (assuming you are already collecting Atlas data for one project). This task requires that you do the following:
-* Create a new **mongodbatlas.yaml** file similar to previous step and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second collector deployment, because these events are global and are already captured by first collector.
-* State is maintained per project, change the `DBNAME` so that state (keys for bookkeeping) maintained in the database (key value store) are not in conflict.
-* Configure the script on a Linux machine, then go to your configuration file.
 
-6. By default the solution collects all log types & metrics for all the clusters, if you to filter based on cluster alias, do the following
-1. After the deployment is complete, go to the Lambda console, and open the **mongodbatlas.yaml** file and uncomment `Clusters` parameter under `Collection` section, as shown in the following example. Add your cluster names for which you want to collect logs & metrics. Cluster name should be same as what you have specified during [cluster creation](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/#specify-a-name-for-the-cluster-in-the-name-box).
+##### Configure collection for multiple projects
+
+If you are already collecting Atlas data for one project, perform the following steps to configure collection for additional projects:
+
+1. Create a new **mongodbatlas.yaml** file similar to the one in the previous step and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second collector deployment, because these events are global and are already captured by the first collector.
+
+1. Because state is maintained per project, change the `DBNAME` so that the bookkeeping keys maintained in the database (a key-value store) do not conflict.
+
+1. Configure the script on a Linux machine (or use the same machine), and run it using the new configuration file.
+
+   ```bash title="Example execution of second yaml file"
+   /usr/bin/python3 -m sumomongodbatlascollector.main <path-of-second-yaml-file>
+   ```
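For the script-based multi-project steps above, the second **mongodbatlas.yaml** might differ from the first roughly as follows (`EVENTS_ORG` and `DBNAME` come from the steps; section placement and values are illustrative placeholders):

```yaml
# Illustrative second-project fragment of mongodbatlas.yaml (a sketch)
Collection:
  # Use a distinct DBNAME per project so bookkeeping state keys do not conflict.
  DBNAME: mongodbatlas-project2   # placeholder name
# Organisation events are already captured by the first collector:
# EVENTS_ORG: true   # value is a placeholder
```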
+
+##### Filtering log types and metrics
+
+By default, the solution collects all log types and metrics for all clusters. To filter by cluster alias and log type, do the following:
+
+1. Open the **mongodbatlas.yaml** file and uncomment the `Clusters` parameter under the `Collection` section, as shown in the following example. Add the names of the clusters for which you want to collect logs and metrics. Each cluster name should match the name you specified during [cluster creation](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/#specify-a-name-for-the-cluster-in-the-name-box).
 
 <img src={useBaseUrl('img/integrations/databases/mongodbatlas/changecluster.png')} alt="MongoDB Atlas filter by cluster" />
 
 1. By default, the solution collects the log types and metrics that are used in the app. To collect only specific log types and metric types, uncomment the respective log type or metric name as shown below.
 
 <img src={useBaseUrl('img/integrations/databases/mongodbatlas/updatemetricslogs.png')} alt="MongoDB Atlas filter by log and metric type" />
 
-```sh title="Example execution of second yaml file"
-/usr/bin/python3 -m sumomongodbatlascollector.main <path-of-second-yaml-file>
-```
+1. After saving the changes in your file, the next invocation (per the cron job schedule) will use the new configuration file.
 
 
 ### Step 4: Configure Webhooks for Alerts Collection
