Updating logs troubleshooting documentation in line with Customer Signals #33669
base: master
Conversation
estherk15 left a comment:
Thanks for the update, let me know if you have any questions on my suggestions!
> If you are unable to access your Restriction Queries in Datadog, please contact your Datadog Administrator to verify if your role is affected.
> See [Check Restrictions Queries][4] for more information on configuring Logs RBAC data access controls.
> Furthermore, Legacy Permissions can also affect the ability to see Logs, particularly in the [Log Explorer][2]. You may find yourself unable to view logs from certain indexes, or only one index at a time. See [Legacy Permissions][10] for more information on how these can be applied to your role and organisation.
Suggested change:

> If you are unable to access your Restriction Queries in Datadog, contact your Datadog Administrator to verify if your role is affected.
> See [Check Restrictions Queries][4] for more information on configuring Logs RBAC data access controls.
> **Legacy Permissions** can also restrict access to Logs, particularly in the [Log Explorer][2]. Depending on configuration, access may be limited to specific indexes or to a single index at a time. For more information on how Legacy Permissions are applied at the role and organization level, see [Legacy Permissions][10].
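For a command-line way to verify what a role is allowed to do, the Roles API can list a role's permissions. This is a sketch, not from the page under review; the keys and role ID are placeholders, and an Administrator may need to run it:

```
# List roles to find the ID of the role in question
curl -s "https://api.datadoghq.com/api/v2/roles" \
  -H "DD-API-KEY: <DD_API_KEY>" -H "DD-APPLICATION-KEY: <DD_APP_KEY>"

# List the permissions granted to that role (look for logs_read_data and similar)
curl -s "https://api.datadoghq.com/api/v2/roles/<ROLE_ID>/permissions" \
  -H "DD-API-KEY: <DD_API_KEY>" -H "DD-APPLICATION-KEY: <DD_APP_KEY>"
```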
> See [Set daily quota][5] for more information on setting up, updating or removing the quota.
> If you are unsure whether or when a daily quota has been reached historically, you can verify this in the Event Explorer by searching through the tag datadog_index:{index_name}.
Suggested change:

> To verify if a daily quota has been reached historically, you can search in the Event Explorer with the tag `datadog_index:{index_name}`.
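For reference, a daily quota can also be set or changed through the Update an Index API. The following is a sketch only; the body is trimmed to the relevant fields, the values are placeholders, and the full options are in the [Set daily quota][5] page:

```
# Set a daily quota of 10M events on an index (filter is part of the index definition)
curl -X PUT "https://api.datadoghq.com/api/v1/logs/config/indexes/<INDEX_NAME>" \
  -H "DD-API-KEY: <DD_API_KEY>" -H "DD-APPLICATION-KEY: <DD_APP_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"filter": {"query": "*"}, "daily_limit": 10000000}'
```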
> By default, Datadog parses all epoch timestamps in Logs with the default timezone set as UTC.
> If logs are arriving with timestamps ahead or behind this time, you may see logs shifted by the number of hours from UTC that the timezone is set to.
> To adjust the timezone of the logs during processing, please refer to the footnotes in Datadog's [Parsing][11] guide in using the timezone parameter as part of the date matcher.
Suggested change:

> By default, Datadog parses all epoch timestamps in Logs using UTC. If incoming logs use a different timezone, timestamps may appear shifted by the corresponding offset from UTC. | |
> To adjust the timezone of the logs during processing, see the footnotes in Datadog's [Parsing][11] guide on using the `timezone` parameter with the date matcher.
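A quick Python sketch (not from the docs under review) of the shift described above: a host in UTC+1 that stamps its local wall-clock time as an epoch value produces a timestamp one hour ahead of the true UTC instant.

```python
from datetime import datetime, timedelta, timezone

# Epoch timestamps carry no timezone information; Datadog reads them as UTC.
true_epoch = 1700000000
utc_time = datetime.fromtimestamp(true_epoch, tz=timezone.utc)

# A host in UTC+1 that writes its local wall-clock time as an epoch value
# effectively adds its offset to the true instant.
local_wall_clock = utc_time + timedelta(hours=1)
written_epoch = int(local_wall_clock.timestamp())

# The log appears shifted by the full offset from UTC.
print(written_epoch - true_epoch)  # 3600 seconds, i.e. one hour ahead
```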
> [16]: https://docs.datadoghq.com/api/latest/logs-indexes/#get-an-index
> [17]: https://docs.datadoghq.com/api/latest/logs-indexes/#get-an-index
> [18]: https://docs.datadoghq.com/agent/troubleshooting/send_a_flare/?tab=agent
Suggested change:

> [16]: /api/latest/logs-indexes/#get-an-index
> [17]: /api/latest/logs-indexes/#get-an-index
> [18]: /agent/troubleshooting/send_a_flare/?tab=agent
> [19]: /logs/log_configuration/processors/?tab=ui#grok-parser
> Follow these steps to convert a timestamp localization to UTC using the steps from the example provided using Datadog's [Grok Parser]
> 1. Navigate to the [Pipelines][9] page.
> 2. In **Pipelines**, select the correct pipeline matching to your logs (example here?)
> 3. Open the Grok Parser processor that is parsing your logs.
> 4. Given that a local host is logging in UTC+1, we want to adjust the date matcher to account for this difference. The result is that we are adding a comma, and a new string defining the timezone to UTC+1.
> 5. Ensure that the [Log Date Remapper][8] is using the parsed attribute as the official timestamp for the matching logs.
Suggested change:

> Epoch timestamps can be adjusted using the `timezone` parameter in a Grok Parser processor. Follow these steps to convert a localized timestamp to UTC using the example in Datadog's [Grok Parser][19] guide.
> 1. Navigate to the [Pipelines][9] page.
> 2. In **Pipelines**, select the correct pipeline matching to your logs.
> 3. Open the Grok Parser processor that is parsing your logs.
> 4. Given that a local host is logging in UTC+1, adjust the date matcher to account for this difference. The result should add a comma and a new string defining the timezone to UTC+1.
> 5. Verify that the [Log Date Remapper][8] is using the parsed attribute as the official timestamp for the matching logs.
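For illustration, step 4 could look like the following Grok rule change. The rule name, attribute, and date format here are hypothetical; the two-argument form of the `date` matcher is the one described in the [Parsing][11] guide:

```
# Before: the date matcher assumes the timestamp is already UTC
access_log %{date("dd/MMM/yyyy:HH:mm:ss"):timestamp}

# After: a comma and a timezone string declare the source timezone as UTC+1
access_log %{date("dd/MMM/yyyy:HH:mm:ss", "UTC+1"):timestamp}
```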
@jmene442 To confirm, do you still want to add an example here? I removed "(example here?)"
> [Logging Without Limits™][14] allows decoupling of log ingestion and indexation, to allow you to save what logs matter most to your organisation. However, when Exclusion Filters are applied to indexes, coverage that is too broad can cause more logs to be excluded than in the intended result.
> Make sure to heavily check exclusion filters and index filters. Parsed JSON logs and unparsed logs may match unexpectedly on index filters, especially when using free text search to match on logs to exclude small strings in logs from indexation. This can cause the entire log, which may have other valuable information, to be dropped from indexing. You can read more about the difference between full text and free text search in [Search Syntax][12].
Suggested change:

> [Logging Without Limits™][14] decouples log ingestion from indexing, allowing you to retain the logs that matter most. When exclusion filters applied to indexes are too broad, they may exclude more logs than intended.
> Review both exclusion filters and index filters carefully. Parsed and unparsed JSON logs can match index filters in unexpected ways, particularly when free-text search is used to exclude short strings. This can result in entire logs being dropped from indexing, even when they contain other valuable data. For details on the differences between full-text and free-text search, see [Search Syntax][12].
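To illustrate the pitfall (the queries below are hypothetical examples, not from the page under review): a free-text exclusion filter matches the string anywhere in the log content, while an attribute search is scoped to a single facet:

```
# Free-text search: excludes ANY log whose content contains "timeout",
# even when the word only appears inside a message or a URL
timeout

# Attribute search: excludes only logs whose error.kind attribute is "timeout"
@error.kind:timeout
```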
> If Logs do not appear to be indexed, or at a smaller or higher rate than expected, check Estimated Usage Metric volumes to verify
> Tag such as `datadog_index`, `datadog_is_excluded`, `service` and `status` are available, depending on the metric, for filtering your query down to specific reserved attributes such as `service` and `status`. You can then also filter metrics such as `datadog.estimated_usage.logs.ingested_events` by whether they are excluded, and the specific index that is either indexing the log, or excluding the log based on filters.
> If a datadog_index tag is presented as N/A for a metric datapoint, the log for that datapoint does not match any of the indexes in your organisation. Consider the order and filter queries of your indexes, if they may be excluding certain types of logs. Estimated Usage Metrics do not respect [Daily Quotas][13].
Suggested change:

> If logs are not indexed, or are indexed at a higher or lower rate than expected, review Estimated Usage Metrics to verify log volumes.
> Depending on the metric, tags such as `datadog_index`, `datadog_is_excluded`, `service`, and `status` are available for filtering. Use these tags to filter metrics such as `datadog.estimated_usage.logs.ingested_events` by exclusion status and by the index that is indexing or excluding the logs.
> If the `datadog_index` tag is set to `N/A` for a metric datapoint, the corresponding logs do not match any index in your organization. Review index order and filter queries to identify potential exclusions.
> **Note**: Estimated Usage Metrics do not respect [Daily Quotas][13].
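As an example of slicing the metric named above in a dashboard or notebook (query syntax only; the index name in the second query is illustrative):

```
# Ingested event volume broken down by target index and exclusion status
sum:datadog.estimated_usage.logs.ingested_events{*} by {datadog_index,datadog_is_excluded}

# Only events excluded by filters on a hypothetical "main" index
sum:datadog.estimated_usage.logs.ingested_events{datadog_index:main,datadog_is_excluded:true}
```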
> ## Estimated Usage Metrics
> If Logs do not appear to be indexed, or at a smaller or higher rate than expected, check Estimated Usage Metric volumes to verify
Is there a link to the Estimated Usage Metrics Volumes?
> ### Raw Log
> To collect a raw log, collect the log directly from the source that is generating the log, dependent on your architecture or logger setup.
> Attach the log either as a text file, or as JSON directly to your support ticket.
> ###
> If the log appears in the Live Tail, but is not appearing in the Log Explorer, please share the result of the call to the [Get All Indexes][16] API endpoint in your support ticket.
> - If you have a large number of indexes in your organisation, please be sure to check estimated usage metrics using the steps above to verify if possible, which
> - Then, you can use the [Get an Index][17] API call to return the result for that index, and upload that to the support ticket.
> ### Upload a Flare
> If using the Agent to send logs, and logs are not appearing at all in the Datadog UI, send an [Agent Flare][18] to the support ticket.
Table might be easier to scan and read through:
Suggested change:

| Information | Description |
|------------|-------------|
| **Raw log sample** | Collect the log directly from the source generating it, based on your architecture or logger configuration. Attach the log to the support ticket as a **text file** or **raw JSON**. |
| **Indexes configuration (Live Tail only)** | If the log appears in Live Tail but not in the Log Explorer, include the response from the [Get All Indexes][16] API call. If your organization has many indexes, review Estimated Usage Metrics to identify the relevant index, then include the response from the [Get an Index][17] API call for that index. |
| **Agent flare** | If logs are sent using the Agent and do not appear anywhere in the Datadog UI, submit an [Agent Flare][18] with the support ticket. |
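For reference, the API calls and flare command in the table could look like the following sketch. The API keys, index name, and case ID are placeholders:

```
# Get All Indexes ([16])
curl -s "https://api.datadoghq.com/api/v1/logs/config/indexes" \
  -H "DD-API-KEY: <DD_API_KEY>" -H "DD-APPLICATION-KEY: <DD_APP_KEY>"

# Get an Index ([17])
curl -s "https://api.datadoghq.com/api/v1/logs/config/indexes/<INDEX_NAME>" \
  -H "DD-API-KEY: <DD_API_KEY>" -H "DD-APPLICATION-KEY: <DD_APP_KEY>"

# Send an Agent flare attached to an existing case ([18])
datadog-agent flare <CASE_ID>
```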
> If a datadog_index tag is presented as N/A for a metric datapoint, the log for that datapoint does not match any of the indexes in your organisation. Consider the order and filter queries of your indexes, if they may be excluding certain types of logs. Estimated Usage Metrics do not respect [Daily Quotas][13].
> ## Create a support ticket
> If the above troubleshooting steps do not resolve your issues with missing logs in Datadog, create a [support ticket][15]. If possible, include the following information in your support ticket:
Suggested change:

> If the above troubleshooting steps do not resolve your issues with missing logs in Datadog, create a [support ticket][15]. If possible, include the following information: