modules/cluster-logging-exported-fields-top-level-fields.adoc
11 lines changed: 11 additions & 0 deletions
@@ -9,6 +9,7 @@
 
 The top level fields may be present in every record.
 
+[discrete]
 == message
 
 The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty `structured` field is present. See the description of `structured` for more.
@@ -17,6 +18,7 @@ The original log entry text, UTF-8 encoded. This field may be absent or empty if
 Data type:: text
 Example value:: `HAPPY`
 
+[discrete]
 == structured
 
 Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the `message` field will contain the original log message. The `structured` field can have any subfields that are included in the log message; there are no restrictions defined here.
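The `message`/`structured` interplay described above can be sketched as a small collector-side routine. This is a minimal illustration, not the forwarder's real implementation: the function name `build_record` and the `parse_json` flag are hypothetical stand-ins for the forwarder's structured-JSON parsing option.

```python
import json


def build_record(raw_line: str, parse_json: bool) -> dict:
    """Sketch: populate `structured` or `message` for one raw log line."""
    record = {}
    if parse_json:
        try:
            parsed = json.loads(raw_line)
            if isinstance(parsed, dict):
                # Valid structured log: store the equivalent JSON structure.
                # `message` may be absent or empty in this case.
                record["structured"] = parsed
                return record
        except ValueError:
            pass  # Not valid JSON: fall through to `message`.
    # Otherwise the original UTF-8 text goes in `message`.
    record["message"] = raw_line
    return record
```

For example, `build_record('{"level": "info"}', True)` yields a record with only `structured`, while a plain-text line (or `parse_json=False`) yields a record with only `message`.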
@@ -25,6 +27,7 @@ Original log entry as a structured object. This field may be present if the forw
 
+[discrete]
 == @timestamp
 
 A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for `@timestamp` when working with Elasticsearch.
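The documented example value for this field uses nanosecond precision. Python's `datetime` only carries microseconds, so this sketch pads three trailing zeros to reproduce the documented format; the padding is an assumption about how the extra digits are produced, not part of the specification.

```python
from datetime import datetime, timezone

# Reconstruct the documented example value `2015-01-24 14:06:05.071000000 Z`.
dt = datetime(2015, 1, 24, 14, 6, 5, 71000, tzinfo=timezone.utc)

# %f yields six microsecond digits; pad to nine to mimic nanosecond precision.
stamp = dt.strftime("%Y-%m-%d %H:%M:%S.%f") + "000 Z"
print(stamp)  # 2015-01-24 14:06:05.071000000 Z
```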
@@ -33,27 +36,31 @@ A UTC value that marks when the log payload was created or, if the creation time
 Data type:: date
 Example value:: `2015-01-24 14:06:05.071000000 Z`
 
+[discrete]
 == hostname
 
 The name of the host where this log message originated. In a Kubernetes cluster, this is the same as `kubernetes.host`.
 
 [horizontal]
 Data type:: keyword
 
+[discrete]
 == ipaddr4
 
 The IPv4 address of the source server. Can be an array.
 
 [horizontal]
 Data type:: ip
 
+[discrete]
 == ipaddr6
 
 The IPv6 address of the source server, if available. Can be an array.
 
 [horizontal]
 Data type:: ip
 
+[discrete]
 == level
 
 The logging level from various sources, including `rsyslog(severitytext property)`, a Python logging module, and others.
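Mapping the levels or priorities of other logging systems to their nearest `level` keyword can be sketched as a lookup table. The table below maps RFC 5424 syslog numeric severities; the exact target keyword set and the `info` fallback are assumptions based on common conventions, not the collector's actual mapping.

```python
# Hypothetical mapping from syslog numeric severities (RFC 5424) to
# keyword values for the `level` field.
SYSLOG_TO_LEVEL = {
    0: "emerg", 1: "alert", 2: "crit", 3: "err",
    4: "warn", 5: "notice", 6: "info", 7: "debug",
}


def map_level(severity: int) -> str:
    """Map a syslog severity to its nearest `level` keyword, defaulting to `info`."""
    return SYSLOG_TO_LEVEL.get(severity, "info")
```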
@@ -80,13 +87,15 @@ Map the log levels or priorities of other logging systems to their nearest match
 Data type:: keyword
 Example value:: `info`
 
+[discrete]
 == pid
 
 The process ID of the logging entity, if available.
 
 [horizontal]
 Data type:: keyword
 
+[discrete]
 == service
 
 The name of the service associated with the logging entity, if available. For example, syslog's `APP-NAME` and rsyslog's `programname` properties are mapped to the service field.
@@ -101,13 +110,15 @@ Optional. An operator-defined list of tags placed on each log by the collector o
 [horizontal]
 Data type:: text
 
+[discrete]
 == file
 
 The path to the log file from which the collector reads this log entry. Normally, this is a path in the `/var/log` file system of a cluster node.
 
 [horizontal]
 Data type:: text
 
+[discrete]
 == offset
 
 The offset value. Can represent bytes to the start of the log line in the file (zero- or one-based), or log line numbers (zero- or one-based), so long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation).
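The wrap semantics described above can be sketched as a reader that splits a stream of per-file offsets into segments: offsets must strictly increase within one log file, so any non-increasing value marks a rotated, new version of the file. The segmentation function is illustrative, not a real collector API.

```python
def detect_rotation(offsets):
    """Sketch: split a stream of offsets into per-file-version segments.

    A value that fails to increase ("wraps") starts a new segment,
    representing a rotated log file.
    """
    segments = [[]]
    for off in offsets:
        if segments[-1] and off <= segments[-1][-1]:
            segments.append([])  # Wrap detected: new version of the file.
        segments[-1].append(off)
    return segments
```

For example, `detect_rotation([0, 120, 340, 0, 75])` splits into `[[0, 120, 340], [0, 75]]`, the second segment beginning where the offset wrapped back to zero at rotation.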