[Kubernetes](https://kubernetes.io/) is a container orchestration tool that helps deploy applications onto physical or virtual machines, scale the deployment to meet traffic demands, and push updates without downtime. The Kubernetes cluster, or environment, where the application instances run is connected internally through a private network. You can install the `cloudflared` daemon inside the Kubernetes cluster to connect applications in the cluster to Cloudflare.

This tutorial will cover how to expose a Kubernetes service to the public Internet using `cloudflared`. For the purposes of this example, we will deploy a basic web application alongside `cloudflared` in Google Kubernetes Engine (GKE). The same principles apply to any other Kubernetes environment (such as `minikube`, `kubeadm`, or a cloud-based Kubernetes service) where `cloudflared` can connect to Cloudflare's network.

As shown in the diagram, we recommend setting up `cloudflared` as an adjacent deployment to the application deployments. Having a separate Kubernetes deployment for `cloudflared` allows you to scale `cloudflared` independently of the application. In the `cloudflared` deployment, you can spin up [multiple replicas](/cloudflare-one/connections/connect-networks/configure-tunnels/tunnel-availability/) running the same Cloudflare Tunnel; there is no need to build a dedicated tunnel for each pod. Each `cloudflared` replica/pod can reach all Kubernetes services in the cluster.

:::note
We do not recommend using `cloudflared` in autoscaling setups because downscaling (removing replicas) will break any existing user connections to that replica. Additionally, `cloudflared` does not load balance across replicas; replicas are strictly for high availability. To load balance traffic to your nodes, you can use [Cloudflare Load Balancer](/load-balancing/private-network/) or a third-party load balancer.
:::
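
For illustration, a minimal sketch of such an adjacent `cloudflared` Deployment is shown below. The image tag, replica count, labels, and Secret name are placeholders rather than values from this guide:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2 # multiple replicas of the same tunnel for availability
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run
            - --token
            - $(TUNNEL_TOKEN)
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: tunnel-token # Kubernetes Secret holding the tunnel token (created in a later step)
                  key: token
```
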
Once the cluster is connected to Cloudflare, you can configure Cloudflare Tunnel routes to control how `cloudflared` will proxy traffic to services within the cluster. For example, you may wish to publish certain Kubernetes applications to the Internet and restrict other applications to internal WARP client users.

## Prerequisites
## 1. Create a GKE cluster
- Install the [gcloud CLI](https://cloud.google.com/sdk/docs/install) and [kubectl CLI](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl).
- In the GCP console, create a new Kubernetes cluster.
- To connect to the cluster, select the three-dot menu and then select **Connect** from the dropdown.
- Copy the command that appears and paste it into your local terminal. It will look similar to the sketch below.
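
The exact command comes from the GCP console, but it generally resembles the following sketch, where the cluster name, region, and project ID are placeholders:

```sh
# Fetch cluster credentials and point kubectl's current context at the cluster
gcloud container clusters get-credentials my-cluster \
  --region us-central1 \
  --project my-project

# Confirm that kubectl can reach the cluster
kubectl get nodes
```
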
## Set up the web app
## Create a tunnel
Applications must be packaged into a containerized image, such as a Docker image, before you can run them in Kubernetes. Kubernetes uses the image to spin up multiple instances of the application.
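
For example, if your application is not yet containerized, you would typically build an image from a Dockerfile and push it to a registry that the cluster can pull from. A rough sketch, using a placeholder registry and image name:

```sh
# Build the application image from the Dockerfile in the current directory
docker build -t registry.example.com/my-app:v1 .

# Push the image to a registry that the cluster can pull from
docker push registry.example.com/my-app:v1
```

The httpbin application used later in this guide is typically run from a prebuilt public image, so a custom build may not be necessary for this example.
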
## Store the tunnel token
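
One common approach is to store the token as a Kubernetes Secret so that the `cloudflared` pods can read it without hardcoding it into a manifest. A minimal sketch, with placeholder Secret and key names matching the Deployment sketch above:

```sh
# Replace <TUNNEL_TOKEN> with the token for your tunnel from the Zero Trust dashboard
kubectl create secret generic tunnel-token --from-literal=token=<TUNNEL_TOKEN>
```
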
## Install and run the tunnel
## Verify tunnel status
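
As a quick sanity check, assuming the `cloudflared` Deployment sketched above, you can confirm that the pods are running and that the logs show registered tunnel connections; the tunnel should also report as healthy in the Zero Trust dashboard:

```sh
# Confirm that the cloudflared pods are running
kubectl get pods -l app=cloudflared

# Inspect the logs for messages showing that tunnel connections were registered
kubectl logs deployment/cloudflared
```
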
## Create the pods
A pod is the basic deployable object that Kubernetes creates. It represents an instance of a running process in the cluster. The following `.yml` file (`httpbin-app.yml`) will create a pod that contains the httpbin application. It creates two replicas to prevent downtime. The application will be accessible inside the cluster at `web-service:80`.
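
A minimal sketch of such a manifest, with an assumed httpbin container image and label names, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-deployment
spec:
  replicas: 2 # two replicas to prevent downtime
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin:latest # placeholder httpbin image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: httpbin
  ports:
    - port: 80
      targetPort: 80
```
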