Commit cac0849

Merge branch 'cloudflare:production' into master
2 parents 49a952a + 1ea1c91 commit cac0849

46 files changed: +345 additions, -146 deletions

products/distributed-web/src/content/ipfs-gateway/automated-deployment.md

Lines changed: 3 additions & 0 deletions

@@ -24,3 +24,6 @@ hash. There are several tools that help with different parts of this:
 - [dnslink-cloudflare](https://github.com/ipfs-shipyard/dnslink-cloudflare) is a
   script to programmatically update DNSLink records. This can be run with the
   `-Q` flag of `ipfs add` that only outputs the top-level hash.
+- [Fission's IPNS support](https://guide.fission.codes/developers/custom-domains/using-cloudflare-ipfs-gateway)
+  lets you use the Fission IPFS app publishing system from the CLI
+  or from GitHub Actions, while using Cloudflare-managed DNS and gateway.
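Taken together, the tools in this list can be chained into a small publish script. The following is a minimal sketch, assuming dnslink-cloudflare reads a Cloudflare API token from the environment and accepts the domain, record, and link flags shown; the flag names are illustrative, so check the script's README for the exact interface:

```bash
# Minimal sketch: publish ./public to IPFS, then update the DNSLink record.
# CF_API_TOKEN and the dnslink-cloudflare flags below are assumptions, not
# confirmed by this diff; consult the script's README before relying on them.
export CF_API_TOKEN="<cloudflare-api-token>"

# -r adds the directory recursively; -Q prints only the top-level hash.
hash=$(ipfs add -rQ ./public)

# Point the domain's DNSLink record at the new hash.
dnslink-cloudflare -d example.com -r _dnslink -l "/ipfs/${hash}"
```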
Lines changed: 192 additions & 0 deletions

@@ -0,0 +1,192 @@
+---
+title: Analytics
+order: 45
+pcx-content-type: how-to
+---
+
+# Load balancing analytics
+
+Using load balancing analytics, you can:
+- Evaluate traffic flow.
+- Assess the health status of origin servers in your pools.
+- Review changes in pools and pool health over time.
+
+<Aside type="note">
+
+Load balancing analytics are only available to customers on paid plans (Pro, Business, and Enterprise).
+
+</Aside>
+
+## Dashboard Analytics
+
+### Overview metrics
+
+To view **Overview** metrics for your load balancer, go to **Traffic** > **Load Balancing Analytics**.
+
+These metrics show the number of requests routed to specific pools within a load balancer, helping you:
+- Evaluate the effects of adding or removing a pool.
+- Decide when to create new origin pools.
+- Plan for peak traffic demands and future infrastructure needs.
+
+Add additional filters for specific pools, times, regions, and origins.
+
+### Latency and Logs
+
+To view latency and log information for your load balancer, go to **Traffic** > **Load Balancing Analytics** > **Latency**.
+
+**Latency** metrics show an interactive map, helping you identify regions with **Unhealthy** or **Slow** pools.
+
+**Logs** provide a history of all origin server status changes and how they affect your load balancing pools.
+
+## GraphQL Analytics
+
+For more flexibility, get load balancing metrics directly from the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api).
+
+Get started with a sample query:
+
+<details>
+<summary>Requests per pool</summary>
+<div>
+
+This query shows the number of requests each pool receives from each location in Cloudflare's global network.
+
+```graphql
+---
+header: Query
+---
+{
+  viewer {
+    zones(filter: {zoneTag: "your Zone ID"}) {
+      loadBalancingRequestsAdaptiveGroups(
+        limit: 100,
+        filter: {
+          datetime_geq: "2021-06-26T00:00:00Z",
+          datetime_leq: "2021-06-26T03:00:00Z",
+          lbName: "lb.example.com"
+        },
+        orderBy: [datetimeFifteenMinutes_DESC]
+      ) {
+        count
+        dimensions {
+          datetimeFifteenMinutes
+          coloCode
+          selectedPoolName
+        }
+      }
+    }
+  }
+}
+```
+
+```json
+---
+header: Response (truncated)
+---
+{
+  "data": {
+    "viewer": {
+      "zones": [
+        {
+          "loadBalancingRequestsAdaptiveGroups": [
+            {
+              "count": 4,
+              "dimensions": {
+                "coloCode": "IAD",
+                "datetimeFifteenMinutes": "2021-06-26T00:45:00Z",
+                "selectedPoolName": "us-east"
+              }
+            },
+            ...
+          ]
+        }
+      ]
+    }
+  }
+}
```
+
+</div>
+
+</details>
+
+<details>
+<summary>Requests per data center</summary>
+<div>
+
+This query shows the weighted round-trip time (`avgRttMs`) for individual requests from a specific data center (for example, Singapore or `SIN`) to each pool in a specific load balancer.
+
+```graphql
+---
+header: Query
+---
+{
+  viewer {
+    zones(filter: {zoneTag: "your Zone ID"}) {
+      loadBalancingRequestsAdaptive(
+        limit: 100,
+        filter: {
+          datetime_geq: "2021-06-26T00:00:00Z",
+          datetime_leq: "2021-06-26T03:00:00Z",
+          lbName: "lb.example.com",
+          coloCode: "SIN"
+        },
+        orderBy: [datetime_DESC]
+      ) {
+        selectedPoolName
+        pools {
+          poolName
+          healthy
+          healthCheckEnabled
+          avgRttMs
+        }
+      }
+    }
+  }
+}
+```
+
+```json
+---
+header: Response (truncated)
+---
+{
+  "data": {
+    "viewer": {
+      "zones": [
+        {
+          "loadBalancingRequestsAdaptive": [
+            {
+              "pools": [
+                {
+                  "avgRttMs": 67,
+                  "healthCheckEnabled": 1,
+                  "healthy": 1,
+                  "poolName": "asia-ne"
+                },
+                {
+                  "avgRttMs": 156,
+                  "healthCheckEnabled": 1,
+                  "healthy": 1,
+                  "poolName": "us-east_and_asia-ne"
+                },
+                {
+                  "avgRttMs": 237,
+                  "healthCheckEnabled": 1,
+                  "healthy": 1,
+                  "poolName": "us-east"
+                }
+              ],
+              "selectedPoolName": "asia-ne"
+            },
+            ...
+          ]
+        }
+      ]
+    }
+  }
+}
+```
+
+</div>
+
+</details>
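Either query above can be posted to the GraphQL Analytics API with any HTTP client. A minimal sketch, assuming an API token with Analytics Read permission on the zone; the token and zone ID are placeholders:

```bash
# Sketch: run the "Requests per pool" query from the shell.
# Endpoint and bearer-token auth are standard for Cloudflare's GraphQL
# Analytics API; <API_TOKEN> and <ZONE_ID> are placeholders.
curl -s https://api.cloudflare.com/client/v4/graphql \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"query":"{ viewer { zones(filter: {zoneTag: \"<ZONE_ID>\"}) { loadBalancingRequestsAdaptiveGroups(limit: 100, filter: {datetime_geq: \"2021-06-26T00:00:00Z\", datetime_leq: \"2021-06-26T03:00:00Z\", lbName: \"lb.example.com\"}, orderBy: [datetimeFifteenMinutes_DESC]) { count dimensions { datetimeFifteenMinutes coloCode selectedPoolName } } } } }"}'
```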

products/load-balancing/src/content/understand-basics/pools.md

Lines changed: 4 additions & 1 deletion

@@ -224,7 +224,10 @@ For example, you might have a pool with origins hosted in multiple AppEngine pro
 
 Since these examples require specific hostnames per origin, your load balancer could not properly route traffic _without_ a `Host` header override.
 
-If you need an origin `Host` header override, add it when [creating](/create-load-balancer-ui#create-and-add-origin-pools) or editing a pool. For security reasons, this header also needs to be a subdomain of the overall zone. See [Configure Cloudflare and Heroku](https://support.cloudflare.com/hc/articles/205893698) for more details.
+If you need an origin `Host` header override, add it when [creating](/create-load-balancer-ui#create-and-add-origin-pools) or editing a pool. For security reasons, this header must meet one of the following criteria:
+- Is a subdomain of a zone associated with this account
+- Matches the origin address
+- Publicly resolves to the origin address
 
 For details about how origin and monitor `Host` headers interact, see [Host header prioritization](/understand-basics/monitors#host-header-prioritization).
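For API users, the override lives on each origin in the pool definition. A minimal sketch, assuming the Load Balancing API's `origins[].header` field; the account ID, token, and hostnames are placeholders, not part of this diff:

```bash
# Sketch: create a pool whose origin is reached with a Host header override.
# The override (app.example.com) must satisfy one of the criteria listed
# above, for example being a subdomain of a zone on this account.
curl -s -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/load_balancers/pools" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "appengine-pool",
    "origins": [{
      "name": "appengine-origin",
      "address": "ghs.googlehosted.com",
      "enabled": true,
      "header": { "Host": ["app.example.com"] }
    }]
  }'
```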

products/logs/src/content/faq/index.md

Lines changed: 43 additions & 15 deletions

@@ -9,48 +9,76 @@ pcx-content-type: faq
 
 ### Once a request has passed through the Cloudflare network, how soon are the logs available?
 
-Logs become available in approximately 1 to 5 minutes.
-
-In the best case, logs take about 1 minute to process, and so we require that calls to the **Logpull API** be for time periods of at least 1 minute in the past. For example, if it’s 9:43 now, you can ask for logs processed between 9:41 and 9:42. The response will include logs for requests that passed through our network between 9:41 and 9:42 and potentially earlier. It’s normal for our processing to take between 3 and 4 minutes, so when you ask for that same time period, you may also see logs of requests that passed through our network at 9:39 or earlier.
-
 When using **Logpush**, logs are pushed in batches as soon as possible. For example, if you receive a file at 10:10, the file consists of logs that were processed shortly before 10:10.
 
-These timings are only a guideline, not a guarantee, and may depend on network conditions, the request volume for your domain, and other factors. Although we try to get the logs to you as fast as possible, we prioritize not losing log data over speed. On rare occasions, you may see a longer delay. In this case, you don’t need to take any action--the logs will be available as soon as they’re processed.
+When using **Logpull**, logs become available in approximately one to five minutes. Cloudflare requires that calls to the **Logpull API** be for time periods of at least one minute in the past. For example, if it is 9:43 now, you can ask for logs processed between 9:41 and 9:42. The response will include logs for requests that passed through our network between 9:41 and 9:42 and potentially earlier. It is normal for our processing to take between three and four minutes, so when you ask for that same time period, you may also see logs of requests that passed through our network at 9:39 or earlier.
+
+These timings are only a guideline, not a guarantee, and may depend on network conditions, the request volume for your domain, and other factors. Although we try to get the logs to you as fast as possible, we prioritize not losing log data over speed. On rare occasions, you may see a longer delay. In this case, you do not need to take any action. The logs will be available as soon as they are processed.
 
 ### Are logs available for customers who are not on an Enterprise plan?
 
-Not yet, but we’re planning to make them available to other customer plans in the future.
+Not yet, but we are planning to make them available to other customer plans in the future.
 
-### When pulling or pushing logs, I occasionally come across a time period with no data, even though I’m sure my domain received requests at that time. Is this normal?
+### When pulling or pushing logs, I occasionally come across a time period with no data, even though I am sure my domain received requests at that time. Is this normal?
 
-Yes, this is normal. The time period for which you pull or receive logs is based on our processing time, not the time the requests passed through our network. If you receive an empty response, it does not mean there were no requests during that time period. It just means we did not process any logs for your domain during that time.
+Yes. The time period for which you pull or receive logs is based on our processing time, not the time the requests passed through our network. Empty responses do not mean there were no requests during that time period, just that we did not process any logs for your domain during that time.
 
 ### Can I receive logs in a format other than JSON?
 
-Currently not. Talk to your account manager or Cloudflare Support if you’re interested in other formats and we’ll consider them for the future.
+Not at this time. Talk to your account manager or Cloudflare Support if you are interested in other formats and we will consider them for the future.
 
 ## Logpush FAQ
 
 ### What happens if my cloud storage destination is temporarily unavailable?
 
-**Logpush** is designed to retry in case of errors. If your destination is temporarily unavailable, we’ll keep trying until it’s online again and the logs are received. We’ll also automatically catch up, so that you don’t miss any logs. However, if we persistently receive errors from your destination, we’ll take that as a sign that it’s permanently unavailable and disable your push job. It can always be re-enabled later.
+**Logpush** is designed to retry in case of errors. If your destination is temporarily unavailable, Logpush will make the best effort to retry. If Cloudflare persistently receives errors from your destination, Logpush will eventually drop logs. If the errors continue for a prolonged period of time, Logpush will assume that the destination is permanently unavailable and disable your push job. You can always re-enable the job later.
 
 ### Can I adjust how often logs are pushed?
 
 No. Cloudflare pushes logs in batches as soon as possible.
 
-### My job was accidentally turned off, and I didn’t receive my logs for a certain time period. Can they still be pushed to me?
+### My job was accidentally turned off, and I did not receive my logs for a certain time period. Can they still be pushed to me?
+
+No. **Logpush** only pushes the logs once as they become available and is unable to backfill. However, the logs are stored for at least 72 hours and can be downloaded using the **Logpull API**.
+
+### Why am I receiving a validating destination error while setting up a Splunk job?
+You could be seeing this error for multiple reasons:
+* The Splunk endpoint URL is not correct. Cloudflare only supports the Splunk HEC raw endpoint over HTTPS.
+* The Splunk authentication token is not correct. Be sure to URL-encode the token. For example, use `%20` for a space.
+* The certificate for the Splunk server is not properly configured. Certificates generated by Splunk or by a third party should have a Common Name field that matches the Splunk server’s domain name. Otherwise, you may see errors like: `x509: certificate is valid for SplunkServerDefaultCert, not <your-instance>.splunkcloud.com.`
+
+### What is the insecure-skip-verify parameter in Splunk jobs?
+This flag, if set to `true`, makes an insecure connection to Splunk. Setting this value to `true` is equivalent to using the `-k` option with `curl` as shown in Splunk examples and is **not** recommended. Cloudflare highly recommends setting this flag to `false` when using the `insecure-skip-verify` parameter.
+
+### Why do we have the insecure-skip-verify parameter in Splunk jobs if it is not recommended?
+Certificates generated by Splunk or by a third party should have a Common Name field that matches the Splunk server’s domain name. Otherwise, you may see errors like: `x509: certificate is valid for SplunkServerDefaultCert, not <your-instance>.splunkcloud.com.` This happens especially with the default certificates generated by Splunk on startup. Pushes will never succeed unless the certificates are fixed.
+
+The proper way to resolve the issue is to fix the certificates. This flag exists only for those rare scenarios where you do not have access or permission to fix the certificates, such as Splunk Cloud instances, which do not allow changing Splunk server configurations.
+
+### How can I verify that my Splunk HEC is working correctly before setting up a job?
+Ensure that you can publish events to your Splunk instance through `curl` without the `-k` flag and with the `insecure-skip-verify` parameter set to `false`, as in the following example:
+
+```bash
+curl "https://<SPLUNK-ENDPOINT-URL>?channel=<SPLUNK-CHANNEL-ID>&insecure-skip-verify=<INSECURE-SKIP-VERIFY>&sourcetype=<SOURCE-TYPE>" \
+  -H "Authorization: Splunk <SPLUNK-AUTH-TOKEN>" \
+  -d '{"BotScore":99,"BotScoreSrc":"Machine Learning","CacheCacheStatus":"miss","CacheResponseBytes":2478}'
+{"text":"Success","code":0}
+```
+
+### Can I use any HEC network port in the Splunk destination conf?
+No. Cloudflare expects the HEC network port to be configured to `:443` or `:8088`.
 
-No, **Logpush** only pushes the logs once as they become available, and is unable to backfill. However, the logs are stored for a period of at least 72 hours and can be downloaded using the **Logpull API**.
+### Does Logpush integrate with the Cloudflare Splunk App?
+Yes. See [Cloudflare App for Splunk](https://splunkbase.splunk.com/app/4501/) for more information. As long as you ingest logs using the `cloudflare:json` source type, you can use the Cloudflare Splunk App.
 
 ## Logpull API FAQ
 
 ### How long are logs retained?
 
-Cloudflare makes logs available for at least 3 days and up to 7 days. If you need your logs for a longer time period, download and store them locally.
+Cloudflare makes logs available for at least three days and up to seven days. If you need your logs for a longer time period, download and store them locally.
 
-### I’m asking for logs for the time window of 16:10-16:13. However, the timestamps in the logs show requests that are before this time period. Why does that happen?
+### I am asking for logs for the time window of 16:10-16:13. However, the timestamps in the logs show requests that are before this time period. Why does that happen?
 
 When you make a call for the time period of 16:10-16:13, you are actually asking for the logs that were received and processed by our system during that time (hence the endpoint name, _logs/received_). The _received_ time is the time the logs are written to disk. There is some delay between the time the request hits the Cloudflare edge and the time it is received and processed. The _request time_ is what you see in the log itself: _EdgeStartTimestamp_ and _EdgeEndTimestamp_ tell you when the edge started and stopped processing the request.
 
-The advantage of basing the responses on the _time received_ rather than the request or edge time is not needing to worry about late-arriving logs. As long as you are calling our API for continuous time segments, you will always get all of your logs without making duplicate calls. If we based the response on request time, you could never be sure that all the logs for that request time had been processed.
+The advantage of basing the responses on the _time received_ rather than the request or edge time is not needing to worry about late-arriving logs. As long as you are calling our API for continuous time segments, you will always get all of your logs without making duplicate calls. If we based the response on request time, you could never be sure that all the logs for that request time had been processed.
products/logs/src/content/get-started/enable-destinations/datadog/index.md

Lines changed: 4 additions & 4 deletions

@@ -20,7 +20,7 @@ To set up a Datadog Logpush job:
 
 <Aside type="note" header="Note">
 
-Note: Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to Datadog.
+Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to Datadog.
 
 </Aside>
 
@@ -104,8 +104,8 @@ Response:
   "success": true
 }
 ```
+<Aside type="note" header="Note">
 
-## Troubleshooting
+The Datadog destination is exclusive to new jobs and might not be backward compatible with older jobs. Create new jobs if you expect to send your logs directly to Datadog instead of modifying already existing ones. If you try to modify an existing job for another destination to push logs to Datadog, you may observe errors.
 
-### I am observing errors pushing to Datadog after I modify an existing job for another destination to push logs to Datadog.
-Datadog destination is exclusive to new jobs and might not be backward compatible with older jobs. Create new jobs if you expect to send your logs directly to Datadog instead of modifying already existing ones.
+</Aside>
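Following that note, a Datadog job would be created fresh rather than edited from an existing one. A hedged sketch of job creation, assuming the zone-level Logpush jobs endpoint and an illustrative `destination_conf` shape; check the current Logpush documentation for the exact Datadog format:

```bash
# Sketch: create a new Logpush job that sends HTTP request logs to Datadog.
# The destination_conf shape and all identifiers below are illustrative
# assumptions, not taken from this diff.
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "datadog-http-requests",
    "dataset": "http_requests",
    "destination_conf": "datadog://http-intake.logs.datadoghq.com/v1/input?header_DD-API-KEY=<DATADOG-API-KEY>&service=cloudflare",
    "logpull_options": "fields=ClientIP,EdgeStartTimestamp,EdgeResponseStatus&timestamps=rfc3339",
    "enabled": true
  }'
```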
