## Question Description

The logs of different services are written to different Kafka topics, so when writing logs to Kafka in the sinks stage, the topic must be determined from the Kafka topic field obtained in the transform stage. However, after reading the docs and trying several approaches, the result did not meet expectations, so I am asking here. Below is one of my attempts.
## My Config of values.yaml

```yaml
customConfig:
  data_dir: /data/logs
  api:
    enabled: true
    address: 127.0.0.1:8686
    playground: false
  # Log sources
  sources:
    k8s_log:
      type: kubernetes_logs
      # The directory used to persist file checkpoint positions.
      # This stores checkpoint positions, so what should the collection directory be?
      data_dir: "/data/logs/"
      extra_label_selector: "app=hello-admin-clone-stag"
  # Log transforms
  transforms:
    trans_log:
      type: remap
      inputs:
        - k8s_log
      source: |-
        .node_name = del(.kubernetes.pod_node_name)
        .pod_name = del(.kubernetes.pod_name)
        .swimlane = del(.kubernetes.pod_labels.swimlane)
        .owt = del(.kubernetes.pod_labels.owt)
        .service = del(.kubernetes.pod_labels.service)
        .cluster = del(.kubernetes.pod_labels.cluster)
        .k8s_topic = del(.kubernetes.pod_labels.kafkaTopic)
        .version = del(.kubernetes.pod_annotations.version)
        . |= parse_regex!(.file, r'/.*?/.*?/.*?/.*?/(?P<log_path>.*)$')
        del(.file)
        del(.kubernetes)
        .yx_label = "vector"
  # Log sinks
  sinks:
    stdout: # custom name
      type: console
      inputs: [trans_log]
      encoding:
        codec: json
    kafka_sink:
      type: kafka
      inputs:
        - trans_log
      bootstrap_servers: 127.0.0.1:9092
      encoding:
        codec: json
      topic: "{{ .k8s_topic }}"
      compression: gzip
```

## Final

We look forward to your discussion. Thanks~
Replies: 1 comment 1 reply
Helm also uses mustache-style template syntax for its values, so any use of Vector's `{{ ... }}` templating inside the embedded Vector config, especially in YAML format, must be escaped.
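A common escaping pattern, shown here as a sketch adapted to the `kafka_sink` from the question (the exact form depends on how your chart passes `customConfig` through Helm's renderer), is to wrap Vector's template in a Helm string literal so Helm emits the braces verbatim:

```yaml
# Sketch only: Helm renders {{ "{{ .k8s_topic }}" }} to the literal string
# {{ .k8s_topic }}, which Vector then resolves per event at runtime.
sinks:
  kafka_sink:
    type: kafka
    inputs:
      - trans_log
    bootstrap_servers: 127.0.0.1:9092
    encoding:
      codec: json
    topic: '{{ "{{ .k8s_topic }}" }}'
    compression: gzip
```

With this escaping, the rendered Vector config contains `topic: "{{ .k8s_topic }}"`, and Vector picks the Kafka topic per event from the `k8s_topic` field set in the remap transform.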