You cannot see any logs in the [Log Explorer][2] or [Live Tail][3]. This may be happening because your role is part of a restriction query.

If you are unable to access your Restriction Queries in Datadog, contact your Datadog Administrator to verify if your role is affected.

See [Check Restrictions Queries][4] for more information on configuring Logs RBAC data access controls.

**Legacy Permissions** can also restrict access to Logs, particularly in the [Log Explorer][2]. Depending on configuration, access may be limited to specific indexes or to a single index at a time. For more information on how Legacy Permissions are applied at the role and organization level, see [Legacy Permissions][10].


## Missing logs - logs daily quota reached

You have not made any changes to your log configuration, but the [Log Explorer][2] shows that logs are missing for today. This may be happening because you have reached your daily quota.

See [Set daily quota][5] for more information on setting up, updating, or removing the quota.

To verify if a daily quota has been reached historically, you can search in the Event Explorer with the tag `datadog_index:{index_name}`.

{{< img src="logs/troubleshooting/daily_quota_event.png" alt="An event explaining the time at which a daily quota was reached" style="width:90%" >}}

## Missing logs - timestamp outside of the ingestion window

Logs with a timestamp further than 18 hours in the past are dropped at intake.
Fix the issue at the source by checking which `service` and `source` are impacted with the `datadog.estimated_usage.logs.drop_count` metric.
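For example, a metrics query along these lines (a sketch built from the metric and tags mentioned above) can be graphed to see which services and sources are dropping logs:

```text
sum:datadog.estimated_usage.logs.drop_count{*} by {service,source}
```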

## Missing logs - timestamp not aligned with timezone

By default, Datadog parses all epoch timestamps in Logs using UTC. If incoming logs use a different timezone, timestamps may appear shifted by the corresponding offset from UTC.

To adjust the timezone of the logs during processing, see the footnotes in Datadog's [Parsing][11] guide on using the `timezone` parameter as part of the date matcher.

Epoch timestamps can be adjusted using the `timezone` parameter in a Grok Parser processor. Follow these steps to convert a localized timestamp to UTC:

1. Navigate to the [Pipelines][9] page.

2. In **Pipelines**, select the pipeline matching your logs.

3. Open the Grok Parser processor that is parsing your logs.

4. Given that a local host is logging in UTC+1, adjust the date matcher to account for this difference. The result should add a comma and a new string defining the timezone as UTC+1.

5. Verify that the [Log Date Remapper][8] is using the parsed attribute as the official timestamp for the matching logs.
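As an illustration of step 4, the date matcher change looks like the following (a hypothetical parsing rule; the rule name, timestamp format, and attribute name are assumptions for illustration):

```text
Before (timestamp parsed as UTC):
MyParsingRule %{date("yyyy-MM-dd HH:mm:ss"):timestamp}

After (timestamp interpreted as UTC+1):
MyParsingRule %{date("yyyy-MM-dd HH:mm:ss", "UTC+1"):timestamp}
```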

Go to the [Log Explorer][2] to see the logs appearing in line with their original timestamp.

## Unable to parse timestamp key from JSON logs

Datadog requires timestamp attributes to use one of the supported date formats:

- ISO8601
- UNIX (the milliseconds EPOCH format)
- RFC3164

Timestamps that do not exactly match these formats may be dropped, even if they are similar (for example, epoch timestamps in nanoseconds).
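For example, a nanoseconds epoch timestamp can be truncated to the milliseconds EPOCH format before ingestion (a minimal sketch; how the value is read from and written back to your log attributes depends on your logger):

```python
# Hypothetical pre-ingestion fix: truncate a nanoseconds epoch
# timestamp to the milliseconds EPOCH format Datadog recognizes.
ns_timestamp = 1_700_000_000_123_456_789  # nanoseconds since the epoch

# Integer division by 1,000,000 drops the sub-millisecond digits.
ms_timestamp = ns_timestamp // 1_000_000

print(ms_timestamp)  # 1700000000123
```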

If you are unable to convert the timestamp of JSON logs to a [recognized date format][6] before they are ingested into Datadog, follow these steps to convert and map the timestamps using Datadog's [arithmetic processor][7] and [log date remapper][8]:

1. Navigate to the [Pipelines][9] page.
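In the arithmetic processor, the conversion itself is a single expression along these lines (a sketch; the source attribute name `timestamp_ns` and target attribute name `timestamp_ms` are assumptions for illustration):

```text
Arithmetic processor expression (written to a new attribute, timestamp_ms):
timestamp_ns / 1000000
```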

There is an additional truncation in fields that applies only to indexed logs: the value is truncated to 75 KiB for the message field and 25 KiB for non-message fields. Datadog stores the full text, and it remains visible in regular list queries in the Log Explorer. However, the truncated version is displayed when performing a grouped query, such as when grouping logs by that truncated field or performing similar operations that display that specific field.

## Logs present in Live Tail, but missing from Log Explorer

[Logging Without Limits™][14] decouples log ingestion from indexing, allowing you to retain the logs that matter most to your organization. When exclusion filters applied to indexes are too broad, they may exclude more logs than intended.

Review both exclusion filters and index filters carefully. Parsed JSON logs and unparsed logs can match index filters in unexpected ways, particularly when free-text search is used to exclude short strings. This can result in entire logs being dropped from indexing, even when they contain other valuable data. For details on the differences between full-text and free-text search, see [Search Syntax][12].

## Estimated Usage Metrics

If logs are not indexed, or are indexed at a higher or lower rate than expected, review Estimated Usage Metrics to verify log volumes.

Depending on the metric, tags such as `datadog_index`, `datadog_is_excluded`, `service`, and `status` are available for filtering. Use these tags to filter metrics such as `datadog.estimated_usage.logs.ingested_events` by exclusion status and by the index that is indexing or excluding the logs.
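For instance, a query along these lines (a sketch built from the tags above) surfaces excluded log volume broken down by index:

```text
sum:datadog.estimated_usage.logs.ingested_events{datadog_is_excluded:true} by {datadog_index}
```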

If the `datadog_index` tag is set to `N/A` for a metric datapoint, the corresponding logs do not match any index in your organization. Review index order and filter queries to identify potential exclusions.

**Note**: Estimated Usage Metrics do not respect [Daily Quotas][13].


## Create a support ticket
If the above troubleshooting steps do not resolve your issues with missing logs in Datadog, create a [support ticket][15]. If possible, include the following information:

| Information | Description |
|------------|-------------|
| **Raw log sample** | Collect the log directly from the source generating it, based on your architecture or logger configuration. Attach the log to the support ticket as a text file or raw JSON. |
| **Indexes configuration (Live Tail only)** | If the log appears in Live Tail but not in the Log Explorer, include the response from the [Get All Indexes][16] API call. If your organization has many indexes, review Estimated Usage Metrics to identify the relevant index, then include the response from the [Get an Index][17] API call for that index. |
| **Agent flare** | If logs are sent using the Agent and do not appear anywhere in the Datadog UI, submit an [Agent Flare][18] with the support ticket. |


[1]: /help/
[2]: https://app.datadoghq.com/logs
[3]: https://app.datadoghq.com/logs/livetail
[7]: /logs/log_configuration/processors/?tab=ui#arithmetic-processor
[8]: /logs/log_configuration/processors/?tab=ui#log-date-remapper
[9]: https://app.datadoghq.com/logs/pipelines
[10]: /logs/guide/logs-rbac-permissions/?tab=ui#legacy-permissions
[11]: /logs/log_configuration/parsing/?tab=matchers#parsing-dates
[12]: /logs/explorer/search_syntax/#full-text-search
[13]: /logs/log_configuration/indexes/#set-daily-quota
[14]: /logs/guide/getting-started-lwl/
[15]: https://help.datadoghq.com/hc/en-us/requests/new
[16]: /api/latest/logs-indexes/#get-all-indexes
[17]: /api/latest/logs-indexes/#get-an-index
[18]: /agent/troubleshooting/send_a_flare/?tab=agent