Replies: 3 comments 4 replies
-
Would you be able to share an event that resulted in this failure?
-
I have also experienced issues with this
From my understanding, From what I have observed and after double checking the event.created ECS documentation, this field is usually the time when the event was read by an agent or the pipeline, and not when the event was originally created on the system. I was testing this by ingesting some logs into SO 2.4.150 from a Windows endpoint and looking at log-in events. All the documents had a When I removed that processor and re-ingested the logs, I saw the expected timestamps ( Image A: Ingested with processor: Image B: Ingested without processor: This issue would not be very obvious for new logs on a system being ingested into Elastic, but when you initially ingest documents and look for historic events, it becomes apparent. |
-
Created #14693
-
Version
2.4.150
Installation Method
Security Onion ISO image
Description
other (please provide detail below)
Installation Type
Distributed
Location
airgap
Hardware Specs
Exceeds minimum requirements
CPU
24
RAM
64GB
Storage for /
300GB
Storage for /nsm
700GB
Network Traffic Collection
span port
Network Traffic Speeds
1Gbps to 10Gbps
Status
Yes, all services on all nodes are running OK
Salt Status
No, there are no failures
Logs
No, there are no additional clues
Detail
Hello, my team is having an issue with data ingestion while using Elastic Agent and the System integration. The target endpoints are largely Windows 10. The issue is that the "@timestamp" field is getting overwritten with the time the event was ingested. This makes it hard to search the data. In the past (SO v2.4.10), this field contained the timestamp when the event occurred on the host. Before, we would deploy an agent to a host and historical events would be placed into the index with an accurate timeline. Now, months of historical events all get timestomped and appear to have occurred in about a ten-minute window after the agent installation.
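To make the symptom concrete, a quick way to see it is to pull a few recent documents and compare the three timestamp fields side by side (a sketch against the logs-system.security-* data streams; adjust the pattern to your environment):

```
# Compare @timestamp to the original Windows event time and the agent read time
GET logs-system.security-*/_search
{
  "size": 5,
  "sort": [{ "@timestamp": "desc" }],
  "_source": ["@timestamp", "event.created", "winlog.time_created"]
}
```

In our case, @timestamp tracks event.created (roughly the ingest/read time) instead of winlog.time_created (the time the event actually occurred on the host).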
We're not clear why this is happening, but our research indicates that a fallback timestamp processor in the global@custom ingest pipeline (which copies the event.created timestamp to @timestamp) always fires and overwrites the field. The logs-system.security-* pipelines that come with the integration contain a processor to write the "winlog.time_created" field to @timestamp, but for some reason that does not take effect. The "winlog.time_created" field does seem to be present in the data stream mapping.
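For reference, both pipelines can be inspected directly from Dev Tools, for example (the integration pipeline name includes the installed package version, hence the wildcard):

```
# The Security Onion custom pipeline containing the fallback date processor
GET _ingest/pipeline/global@custom

# The managed pipelines installed by the System integration
GET _ingest/pipeline/logs-system.security*
```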
A potential workaround is to delete this processor from the global@custom pipeline, but this could have unintended consequences, and we'd prefer an actual fix.
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp","ignore_failure": true, "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSX","yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } }
Integration/Security Onion versions we tested:
System integration version where it works: 1.34.0
System integration versions where it doesn't work as expected: 1.59.0, 1.62.1, and 2.0.1
SO version that works: 2.4.10
SO versions that don't work as expected: 2.4.110, 2.4.141, and 2.4.150
If anyone knows why this behavior changed, or has a stable fix, we would be very happy to hear about it :)