Parsing Ingress-Nginx Logs to Elasticsearch with Individual Key-Value Pairs for Enhanced Kibana Filtering #4364
Unanswered · cherrycharan asked this question in Q&A
I am encountering challenges in effectively filtering Ingress-Nginx logs in Kibana. Currently, these logs are sent as a JSON array to Elasticsearch, making individual key filtering difficult. My goal is to parse these logs such that each field in the log is treated as a separate key-value pair, facilitating more efficient filtering in Kibana.
Current Configuration:
Ingress-Nginx-Controller Log Format:
I have configured the log format of the Ingress-Nginx-Controller as follows:
```json
{
  "@timestamp": "$time_local",
  "client": "$remote_addr",
  "method": "$request_method",
  "URL": "$host",
  "request": "$request",
  "request_id": "$req_id",
  "request_length": "$request_length",
  "bytes_sent": "$bytes_sent",
  "status": "$status",
  "body_bytes_sent": "$body_bytes_sent",
  "referer": "$http_referer",
  "user_agent": "$http_user_agent",
  "upstream_addr": "$upstream_addr",
  "upstream_status": "$upstream_status",
  "request_time": "$request_time",
  "upstream_response_time": "$upstream_response_time",
  "upstream_connect_time": "$upstream_connect_time",
  "upstream_header_time": "$upstream_header_time"
}
```
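As a quick sanity check that this log format already yields one key per field once the line is parsed as JSON, here is a minimal sketch (the sample values are made up, and only a subset of the fields is shown):

```python
import json

# Hypothetical sample line in the log format above (all values are
# invented examples; only a subset of the fields is shown).
line = (
    '{ "@timestamp": "06/Jan/2024:10:00:00 +0000", "client": "10.0.0.1", '
    '"method": "GET", "URL": "example.com", '
    '"request": "GET /healthz HTTP/1.1", "status": "200", '
    '"request_time": "0.005" }'
)

record = json.loads(line)  # each field becomes a separate key-value pair
print(record["client"])    # -> 10.0.0.1
print(sorted(record))      # the individual keys Kibana could filter on
```

One caveat: fields such as `$http_user_agent` may contain quotes, which would break the JSON; nginx's `escape=json` option for `log_format` (exposed in ingress-nginx via the `log-format-escape-json` ConfigMap setting, if I recall correctly) keeps the line valid JSON.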
Fluentd Configuration:
My Fluentd configuration is set with the following source and destination parameters:
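The actual source and match blocks are not reproduced here. As a rough sketch of the kind of configuration that parses each line as JSON before shipping it (the file paths, tag, and Elasticsearch host/index are assumptions, and the match block requires the fluent-plugin-elasticsearch plugin):

```
# Sketch only: adjust paths, tag, and Elasticsearch settings to your setup.
<source>
  @type tail
  path /var/log/containers/ingress-nginx*.log
  pos_file /var/log/fluentd-ingress-nginx.pos
  tag ingress.nginx
  <parse>
    @type json   # parse each line into individual key-value pairs
  </parse>
</source>

<match ingress.nginx>
  @type elasticsearch
  host elasticsearch.logging.svc
  port 9200
  index_name ingress-nginx
</match>
```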
Issue:
Despite this configuration, the log fields are not being parsed into separate key-value pairs in Elasticsearch, which limits my ability to filter logs effectively in Kibana.
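One common cause of this symptom on Kubernetes is that the container runtime wraps each line, so the nginx JSON arrives nested inside a `log` (or `message`) field rather than at the top level. A hedged sketch of a `filter_parser` stage that re-parses that field (the tag and key name are assumptions about your pipeline):

```
# Sketch only: re-parse the nested JSON string into top-level fields.
<filter ingress.nginx>
  @type parser
  key_name log        # field assumed to hold the raw nginx JSON line
  reserve_data true   # keep the other fields on the event
  <parse>
    @type json
  </parse>
</filter>
```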
Request for Assistance:
I am seeking guidance or suggestions on how to modify my configuration to achieve the desired log parsing and filtering capabilities. Any insights or recommendations from the community would be greatly appreciated.