`eks_fargate/README.md` (41 additions, 28 deletions)

Monitor EKS Fargate logs using the Datadog Agent to collect logs from the kubelet and ship them to Datadog.

1. The most convenient way to enable native kubelet logging is through the Cluster Agent's Admission Controller sidecar injection feature. When configured, all subsequently injected Agent containers automatically have kubelet logging enabled. This requires Cluster Agent `7.68.0` or above. The feature can also be configured manually in your application's manifest.

   Set the `DD_ADMISSION_CONTROLLER_AGENT_SIDECAR_KUBELET_API_LOGGING_ENABLED` Cluster Agent environment variable to `true` so that newly injected Agent containers have kubelet logging enabled. A minimal Helm values sketch is shown below.

1. Attach an [emptyDir][29] volume to your pod and mount it inside the Agent container. This prevents duplicate logs should the Agent container restart.

```yaml
      volumes:
        - name: agent-option
          emptyDir: {}
      containers:
        # Your original container
        - name: "<CONTAINER_NAME>"
          image: "<CONTAINER_IMAGE>"

        # Running the Agent as a sidecar
        - name: datadog-agent
          image: gcr.io/datadoghq/agent:7
          #(...)
```
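
The excerpt above declares the `agent-option` volume but does not show its mount. For the Agent container to write to the emptyDir, it also needs a matching `volumeMounts` entry; the following is a minimal sketch, and the mount path is a hypothetical placeholder rather than a value from this README:

```yaml
        # Sketch only: mount the emptyDir inside the datadog-agent container.
        # The mountPath is a placeholder; use the path your Agent configuration expects.
        - name: datadog-agent
          image: gcr.io/datadoghq/agent:7
          volumeMounts:
            - name: agent-option
              mountPath: "<AGENT_OPTION_MOUNT_PATH>"
```
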
<!-- xxz tab xxx -->
<!-- xxz tabs xxx -->
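
To illustrate the Admission Controller option from step 1: when the Cluster Agent is deployed with the `datadog/datadog` Helm chart, the environment variable can be set under `clusterAgent.env`. This is a hedged sketch rather than a snippet from this README; adapt it to however you deploy the Cluster Agent.

```yaml
# values.yaml (sketch): enable kubelet logging for Agent sidecars injected by
# the Admission Controller. Sidecar injection itself must already be enabled
# (not shown here).
clusterAgent:
  env:
    - name: DD_ADMISSION_CONTROLLER_AGENT_SIDECAR_KUBELET_API_LOGGING_ENABLED
      value: "true"
```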

2. You can configure the Agent sidecar to automatically collect logs for all of the containers in its pod by enabling `DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL`. Alternatively, the log integration can be set up per container with the standard Kubernetes [Autodiscovery annotations][30]; see the annotation sketch below.

<!-- xxx tab "Manual" xxx -->

```yaml
apiVersion: apps/v1
#(...)
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "true"
      #(...)
```

<!-- xxz tab xxx -->
<!-- xxz tabs xxx -->
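
As a sketch of the Autodiscovery alternative mentioned in step 2, a per-container log configuration can be attached as a pod annotation. The container name, `source`, and `service` values below are placeholders, not values from this README:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "<POD_NAME>"
  annotations:
    # Log configuration for the container named <CONTAINER_NAME>
    ad.datadoghq.com/<CONTAINER_NAME>.logs: '[{"source": "<SOURCE>", "service": "<SERVICE>"}]'
spec:
  containers:
    - name: "<CONTAINER_NAME>"
      image: "<CONTAINER_IMAGE>"
    #(...)
```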

Monitor EKS Fargate logs by using [Fluent Bit][14] to route EKS logs to CloudWatch, then use the [Datadog Forwarder][15] to send them to Datadog.

1. To configure Fluent Bit to send logs to CloudWatch, create a Kubernetes ConfigMap that specifies CloudWatch Logs as its output. The ConfigMap specifies the log group, region, prefix string, and whether to automatically create the log group.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name awslogs-https
        log_stream_prefix awslogs-firelens-example
        auto_create_group true
```

2. Use the [Datadog Forwarder][15] to collect logs from CloudWatch and send them to Datadog.
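
One possible way to connect the two, assuming the Forwarder Lambda is already deployed, is a CloudWatch Logs subscription filter on the log group created above. The CloudFormation sketch below is illustrative only and not taken from this README; the Forwarder ARN is a placeholder, and the Forwarder must also allow invocation by CloudWatch Logs.

```yaml
Resources:
  DatadogForwarderSubscription:
    Type: AWS::Logs::SubscriptionFilter
    Properties:
      LogGroupName: awslogs-https                       # log group from the Fluent Bit output above
      FilterPattern: ""                                  # empty pattern forwards all log events
      DestinationArn: "<DATADOG_FORWARDER_LAMBDA_ARN>"   # placeholder for the deployed Forwarder
```
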
## Trace collection

Additional helpful documentation, links, and articles: