docs/integrations/databases/mongodb-atlas.md
The Lambda function should now be sending logs to Sumo Logic. To verify, check the function's logs in CloudWatch under **Monitor** > **Logs**.
##### Configure collection for multiple projects
If you are already collecting Atlas data for one project, perform the following steps to configure for additional projects:
1. [Deploy the MongoDB Atlas SAM application](#deploy-the-sumo-logic-mongodb-atlas-sam-application) with the configuration for a new project.
1. From the Lambda console, open the **mongodbatlas.yaml** file and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second SAM app deployment, because these events are global and are already captured by the first collector.
1. After editing the file, choose **Deploy**. The next Lambda invocation will use the new configuration file.
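The edit described above might look like the following sketch. Only the `EVENTS_ORG` key is named by this guide; the surrounding structure and other key names are assumptions, so confirm them against your deployed **mongodbatlas.yaml**.

```yaml
# Illustrative fragment of mongodbatlas.yaml for the second SAM deployment.
# Only EVENTS_ORG is confirmed by this guide; the surrounding layout is
# an assumption and may differ in your deployed file.
MongoDBAtlas:
  LOG_TYPES:
    # - EVENTS_ORG    # commented out: org-level events are already
    #                 # collected by the first deployment
    - EVENTS_PROJECT  # hypothetical key, shown only for context
```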
##### Filtering log types and metrics
By default, the solution collects all log types and metrics for all clusters. To filter based on cluster alias and log types, do the following:
1. After the deployment is complete, go to the Lambda console, open the **mongodbatlas.yaml** file, and uncomment the `Clusters` parameter under the `Collection` section, as shown in the following example. Add the names of the clusters for which you want to collect logs and metrics. Each cluster name should match the name you specified during [cluster creation](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/#specify-a-name-for-the-cluster-in-the-name-box).
<img src={useBaseUrl('img/integrations/databases/mongodbatlas/changecluster.png')} alt="MongoDB Atlas filter by cluster" />
<img src={useBaseUrl('img/integrations/databases/mongodbatlas/updatemetricslogs.png')} alt="MongoDB Atlas filter by log and metric type" />
1. After editing the file, choose **Deploy**. The next Lambda invocation will use the new configuration file.
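An uncommented `Clusters` entry might look like the following sketch. The cluster names here are placeholders; only the `Clusters` parameter under the `Collection` section is named by this guide.

```yaml
# Illustrative fragment: filter collection to specific clusters.
# Cluster names are placeholders and must match your Atlas cluster names.
Collection:
  Clusters:
    - analytics-cluster   # placeholder
    - reporting-cluster   # placeholder
```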
#### Configure Script-Based Collection for MongoDB Atlas
This section shows you how to configure script-based log collection for the Sumo Logic MongoDB Atlas app. The _sumologic-mongodb-atlas_ script is compatible with Python 3.11 and Python 2.7, and has been tested on Ubuntu 18.04 LTS.
##### Configure collection for multiple projects
If you are already collecting Atlas data for one project, perform the following steps to configure for additional projects:
1. Create a new **mongodbatlas.yaml** file, similar to the previous step, and comment out `EVENTS_ORG`, as shown in the following example. This prevents the collection of `Organisation Events` in the second collector deployment, because these events are global and are already captured by the first collector.
1. State is maintained per project. Change the `DBNAME` so that the state keys (used for bookkeeping) maintained in the database (key-value store) do not conflict.
1. Configure the script on a Linux machine (or use the same machine), and run it using the new configuration file.
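The `DBNAME` change from the step above might look like this sketch. Only `DBNAME` is named by this guide; its exact location in the file and the value shown are assumptions.

```yaml
# Illustrative fragment: give the second project its own state database.
# The value is a placeholder; it only needs to differ from the first project's.
Collection:
  DBNAME: mongodbatlas_project2
```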
```bash title="Example execution of second yaml file"
# The file path below is an example; use the path to your new yaml file.
python -m sumomongodbatlascollector.main /path/to/mongodbatlas_project2.yaml
```
##### Filtering log types and metrics

By default, the solution collects all log types and metrics for all clusters. To filter based on cluster alias and log types, do the following:
1. Open the **mongodbatlas.yaml** file and uncomment the `Clusters` parameter under the `Collection` section, as shown in the following example. Add the names of the clusters for which you want to collect logs and metrics. Each cluster name should match the name you specified during [cluster creation](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/#specify-a-name-for-the-cluster-in-the-name-box).
<img src={useBaseUrl('img/integrations/databases/mongodbatlas/changecluster.png')} alt="MongoDB Atlas filter by cluster" />
1. By default, the solution collects the log types and metrics used in the app. To collect specific log types or metric types, uncomment the respective log type or metric name, as shown below.
<img src={useBaseUrl('img/integrations/databases/mongodbatlas/updatemetricslogs.png')} alt="MongoDB Atlas filter by log and metric type" />
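The uncommenting described above might look like this sketch. The specific type names below are placeholders, not confirmed identifiers; use the names already present in your **mongodbatlas.yaml**.

```yaml
# Illustrative fragment: collect only selected log and metric types.
# All type names are placeholders.
MongoDBAtlas:
  LOG_TYPES:
    - DATABASE_LOGS     # placeholder: uncommented, so it is collected
    # - AUDIT_LOGS      # placeholder: left commented, so it is skipped
  METRIC_TYPES:
    - PROCESS_METRICS   # placeholder
```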