
Commit a237de8

Moves faq to a new page and adds faq about log volume.
1 parent d86748a commit a237de8

File tree

2 files changed (+38 -27 lines)

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+---
+pcx_content_type: faq
+title: FAQ
+structured_data: true
+description: Find answers to common questions about Log Explorer.
+sidebar:
+  order: 152
+---
+
+### Which fields (or columns) are available for querying?
+
+All fields listed in [Datasets](/logs/logpush/logpush-job/datasets/) for the [supported datasets](/log-explorer/manage-datasets/#supported-datasets) are viewable in Log Explorer.
+
+### Why does my query not complete or time out?
+
+Log Explorer performs best when query parameters focus on narrower ranges of time. You may experience query timeouts when your query would return a large quantity of data. Consider refining your query to improve performance.
+
+### Why don't I see any logs in my queries after enabling the dataset?
+
+Log Explorer starts ingesting logs from the moment you enable the dataset. It will not display logs for events that occurred before the dataset was enabled. Make sure that new events have been generated since enabling the dataset, and check again.
+
+### My query returned an error. How do I figure out what went wrong?
+
+We are actively working on improving error codes. If you receive a generic error, check your SQL syntax (if you are using the custom SQL feature) and make sure you have included a date and a limit. If the query still fails, it is likely timing out; try refining your filters.
+
+### Where is the data stored?
+
+The data is stored in Cloudflare R2. Each Log Explorer dataset is stored on a per-customer level, similar to Cloudflare D1, ensuring that your data is kept separate from that of other customers. In the future, this single-tenant storage model will provide you with the flexibility to create your own retention policies and decide in which regions you want to store your data.
+
+### Does Log Explorer support Customer Metadata Boundary?
+
+Customer Metadata Boundary is currently not supported for Log Explorer.
+
+### Are there any constraints on the log volume that Log Explorer can support?
+
+We are continually scaling the Log Explorer data platform. At present, Log Explorer supports log ingestion rates of up to 50,000 records per second. If your needs exceed this, contact your account team.
+
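The timeout and error answers in the new FAQ file both point to the same remedy: constrain the query with a narrow time window, a date filter, and a limit. As a minimal sketch (not part of the commit), assuming the `http_requests` dataset and illustrative column names; check the linked Datasets reference for the fields your dataset actually exposes:

```sql
-- Hypothetical, constrained Log Explorer query (dataset and column names are
-- illustrative; substitute the ones from your enabled dataset).
SELECT
  RayId,
  EdgeResponseStatus
FROM http_requests
WHERE
  -- Narrow time window: a single hour rather than an open-ended range.
  EdgeStartTimestamp >= '2024-06-01T00:00:00Z'
  AND EdgeStartTimestamp < '2024-06-01T01:00:00Z'
  -- Filter early so less data has to be scanned and returned.
  AND EdgeResponseStatus >= 500
-- Cap the result size so the query can return promptly.
LIMIT 100;
```

If a query shaped like this still times out, shrink the time window further before adding more columns or widening the filters.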

src/content/docs/log-explorer/log-search.mdx

Lines changed: 1 addition & 27 deletions
@@ -114,30 +114,4 @@ WHERE
 
 - Narrow your query time frame. Focus on a smaller time window to reduce the volume of data processed. This helps avoid querying excessive amounts of data and speeds up response times.
 - Omit `ORDER BY` and `LIMIT` clauses. These clauses can slow down queries, especially when dealing with large datasets. For queries that return a large number of records, reduce the time frame instead of limiting to the newest `N` records from a broader time frame.
-- Select only necessary columns. For example, replace `SELECT *` with the list of specific columns you need. You can also use `SELECT RayId` as a first iteration and follow up with a query that filters by the Ray IDs to retrieve additional columns. Additionally, you can use `SELECT COUNT(*)` to probe for time frames with matching records without retrieving the full dataset.
-
-## FAQs
-
-### Which fields (or columns) are available for querying?
-
-All fields listed in [Datasets](/logs/logpush/logpush-job/datasets/) for the [supported datasets](/log-explorer/manage-datasets/#supported-datasets) are viewable in Log Explorer.
-
-### Why does my query not complete or time out?
-
-Log Explorer performs best when query parameters focus on narrower ranges of time. You may experience query timeouts when your query would return a large quantity of data. Consider refining your query to improve performance.
-
-### Why don't I see any logs in my queries after enabling the dataset?
-
-Log Explorer starts ingesting logs from the moment you enable the dataset. It will not display logs for events that occurred before the dataset was enabled. Make sure that new events have been generated since enabling the dataset, and check again.
-
-### My query returned an error. How do I figure out what went wrong?
-
-We are actively working on improving error codes. If you receive a generic error, check your SQL syntax (if you are using the custom SQL feature), make sure you have included a date and a limit. If the query still fails it is likely timing out. Try refining your filters.
-
-### Where is the data stored?
-
-The data is stored in Cloudflare R2. Each Log Explorer dataset is stored on a per-customer level, similar to Cloudflare D1, ensuring that your data is kept separate from that of other customers. In the future, this single-tenant storage model will provide you with the flexibility to create your own retention policies and decide in which regions you want to store your data.
-
-### Does Log Explorer support Customer Metadata Boundary?
-
-Customer metadata boundary is currently not supported for Log Explorer.
+- Select only necessary columns. For example, replace `SELECT *` with the list of specific columns you need. You can also use `SELECT RayId` as a first iteration and follow up with a query that filters by the Ray IDs to retrieve additional columns. Additionally, you can use `SELECT COUNT(*)` to probe for time frames with matching records without retrieving the full dataset.
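As a sketch of the column-selection tip retained in log-search.mdx: probe with `SELECT COUNT(*)`, fetch only `RayId` first, then pull additional columns for just those Ray IDs. The dataset name, extra columns, and Ray ID values below are illustrative placeholders, not taken from the page:

```sql
-- 1. Probe: how many records match this window, without retrieving them?
SELECT COUNT(*)
FROM http_requests
WHERE EdgeStartTimestamp >= '2024-06-01T00:00:00Z'
  AND EdgeStartTimestamp < '2024-06-01T02:00:00Z';

-- 2. First pass: fetch only the Ray IDs of interest instead of SELECT *.
SELECT RayId
FROM http_requests
WHERE EdgeStartTimestamp >= '2024-06-01T00:00:00Z'
  AND EdgeStartTimestamp < '2024-06-01T02:00:00Z'
  AND EdgeResponseStatus >= 500;

-- 3. Follow-up: retrieve additional columns only for those specific Ray IDs.
SELECT RayId, ClientRequestPath, EdgeResponseStatus
FROM http_requests
WHERE RayId IN ('hypothetical-ray-id-1', 'hypothetical-ray-id-2');
```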
