## How to upgrade a Helm release
```
# from the repo root (the chart lives in the `chart/` subdirectory)
helm upgrade k8s-controller-helm ./chart -f chart/values.yaml

# or from inside the chart directory
cd chart
helm upgrade k8s-controller-helm . -f values.yaml
```

* `k8s-controller-helm`: the release name (see `helm list`)
* `./chart`: path to the Helm chart (adjust if it lives in a different directory)
* `-f chart/values.yaml`: apply the updated values file

## Dry run
```
helm upgrade k8s-controller-helm ./chart -f chart/values.yaml --dry-run --debug
```

## If you have a tag for your Docker image (like 1.0.2)
If you do not set a `tag`, Helm uses the chart's `appVersion` as the image tag.
Chart.yaml here has `appVersion: 1.16.0`, hence `Pulling image "manzilrahul/k8s-custom-controller:1.16.0"`.

If you check `templates/deployment.yaml`, you will find something like this:
`image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"`
You can change this to look only at `.Values.image.tag` (dropping the `default .Chart.AppVersion` fallback), e.g. `image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"`, and set the tag explicitly:

```yaml
# values.yaml
image:
  repository: manzilrahul/k8s-custom-controller
  tag: "1.0.3"
```
After changing this, run `helm upgrade k8s-controller-helm ./chart -f chart/values.yaml` again,
or check first with a dry run: `helm upgrade k8s-controller-helm ./chart -f chart/values.yaml --dry-run --debug`. The upgrade will:
```md
* Re-render the templates using the updated logic in deployment.yaml
* Use the new values (e.g., image tag 1.0.3) from values.yaml
* Apply the changes to your running release (k8s-controller-helm)
```

> Note:
Until now we have been assuming the application listens on some `:port`, but it does not.
So you will get an error like this (since we expose `:8000` in the Dockerfile):

```md
Readiness probe failed: Get "http://10.42.0.14:8000/": dial tcp 10.42.0.14:8000: connect: connection refused

# Your container is starting, but your app inside is not listening on port 8000, or it takes time to start and the probe fails before it's ready.
```
❓ Does main.go start an HTTP server on port 8000?
> If not, the **readiness probe** will fail because there's nothing listening on that port.

✅ Probes: check your deployment.yaml
```yaml
readinessProbe:
  httpGet:
    path: /
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10

livenessProbe:
  httpGet:
    path: /
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 20

---
# These will fail if your app either:
# 1) isn't listening on port 8000,
# 2) doesn't serve /, or
# 3) takes longer to start than the probe allows
```

* So main.go does not start an HTTP server at all, which is why our container is failing health checks:
```md
Readiness probe failed: dial tcp 10.42.0.14:8000: connect: connection refused
```
> We're just running a controller loop, not a web service, so probes expecting a web server on port 8000 will always fail.

> Option 1: Remove the liveness/readiness probes
```yaml
# In chart/templates/deployment.yaml, remove or comment out:
livenessProbe:
  httpGet:
    path: /
    port: 8000

readinessProbe:
  httpGet:
    path: /
    port: 8000

# then upgrade: `helm upgrade k8s-controller-helm ./chart -f chart/values.yaml`
```
> Option 2: Add a dummy HTTP health endpoint (optional; I am not going to do this)

```go
// main.go (requires "net/http" and "log" in the imports)
go func() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}()
```

> and then point the probes at it:

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8000
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
```
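Either way, the probe `port` has to match whatever the container actually listens on (and, if your chart templates a `containerPort` from values.yaml, keep that in sync too); otherwise you are back to the same connection-refused failure.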

# Let's look at the pod describe events (`kubectl describe pod <pod-name>`)
```md
Events:
Type    Reason     Age  From               Message
----    ------     ---  ----               -------
Normal  Scheduled  30s  default-scheduler  Successfully assigned default/k8s-controller-helm-chart-578458f6d4-vpdnd to k3d-rahulxf-server-0
Normal  Pulled     29s  kubelet            Container image "manzilrahul/k8s-custom-controller:1.0.0" already present on machine
Normal  Created    29s  kubelet            Created container chart
Normal  Started    29s  kubelet            Started container chart
```
So it is running; but how do I access it, or see the logs my application writes?

# Let's view the logs
`kubectl logs -f <pod-name>`
Here we hit the problem of not reaching the Kubernetes API: locally we relied on `--network host` (and a mounted kubeconfig) when running the image with `docker run`.

```
# This is how we ran it locally:
docker run --rm --name k8s-custom-controller \
  --network host \
  -v $HOME/.kube:/root/.kube \
  manzilrahul/k8s-custom-controller:1.0.0

---

# ok, let me ask: how will I add these to Helm?
#   --network host
#   -v $HOME/.kube:/root/.kube
```
Since we're accessing the Kubernetes API from inside the cluster, we do NOT need `--network host` or `-v ~/.kube:/root/.kube`. Instead, we can use the in-cluster Kubernetes API:
```go
import "k8s.io/client-go/rest"

config, err := rest.InClusterConfig()

// If we're using GetClientSetWithContext() or something similar that reads the
// kubeconfig file (~/.kube/config), that's only needed for local dev (like Docker or Compose).
```
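
A minimal sketch of how both modes can live together, assuming a helper named `NewClientset` (the name and the fallback behavior are my assumptions, not the actual project code):

```go
package kube

import (
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// NewClientset (hypothetical helper) prefers the in-cluster config
// (ServiceAccount token) and falls back to ~/.kube/config for local dev,
// replacing the --network host / -v $HOME/.kube docker flags.
func NewClientset() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		// Not running inside a pod: fall back to the kubeconfig file.
		kubeconfig := os.Getenv("KUBECONFIG")
		if kubeconfig == "" {
			home, homeErr := os.UserHomeDir()
			if homeErr != nil {
				return nil, homeErr
			}
			kubeconfig = filepath.Join(home, ".kube", "config")
		}
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
	}
	return kubernetes.NewForConfig(config)
}
```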

# Access the Kubernetes API securely from inside the cluster
```md
When your pod runs in Kubernetes:
* It gets a ServiceAccount token automatically.
* The Go client (InClusterConfig) reads it from:
  /var/run/secrets/kubernetes.io/serviceaccount/token
* And connects to https://kubernetes.default.svc

You do not need:
* --network host
* -v ~/.kube
```
> Note: if you get an error like the one below when running locally, check the `CONTEXT` env var first
```md
docker run --rm --name k8s-custom-controller manzilrahul/k8s-custom-controller
2025/05/17 09:10:49 .env file not found, assuming environment variables are set

E0517 09:10:49.942867       1 reflector.go:628] "Observed a panic" panic="runtime error: invalid memory address or nil pointer dereference" panicGoValue="\"invalid memory address or nil pointer dereference\"" stacktrace=<

# or run it like this:
docker run --rm --name k8s-custom-controller \
  -v $HOME/.kube:/root/.kube \
  -e KUBECONFIG=/root/.kube/config \
  -e CONTEXT="your-context-name" \
  manzilrahul/k8s-custom-controller
```

# Let's tackle this error message

```md
Name: metrics-server
ADDED: Name=metrics-server, Namespace=kube-system, UID=14763af6-3b26-4cbf-b080-2b34c696ea10, Created=2025-05-17 06:26:17 +0000 UTC
Deployment Added:
Name: traefik
ADDED: Name=traefik, Namespace=kube-system, UID=b486cbae-7ad4-4ad1-8b89-0e1c4ac11944, Created=2025-05-17 06:26:56 +0000 UTC
sync deployment, services is forbidden: User "system:serviceaccount:default:k8s-controller" cannot create resource "services" in API group "" in the namespace "default"
sync deployment, ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:default:k8s-controller" cannot create resource "ingresses" in API group "networking.k8s.io" in the namespace "default"
sync deployment, services is forbidden: User "system:serviceaccount:default:k8s-controller" cannot create resource "services" in API group "" in the namespace "kube-system"
sync deployment, ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:default:k8s-controller" cannot create resource "ingresses" in API group "networking.k8s.io" in the namespace "kube-system"
(... the same two lines repeat on every retry ...)
^C
```

Que:
So we're almost done, I guess. I think I need to install an ingress controller for this to be supported; basically this k8s-controller works like: when you create a Deployment in the cluster, it automatically creates a Service and an Ingress for you.

Ans:
> NOOOOOO
> It's failing due to missing RBAC permissions, not because of a missing ingress controller.
```error
User "system:serviceaccount:default:k8s-controller" cannot create resource "services"

❌ What's missing:
I haven't granted the "k8s-controller" ServiceAccount permission to create Services and Ingresses.
```
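For context, the call being denied looks roughly like this inside the controller's sync loop. This is only a sketch: `syncDeployment`, the port numbers, and the Service spec are illustrative assumptions, not the project's actual code:

```go
package controller

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// syncDeployment (illustrative) creates a Service for a watched Deployment.
// Without the RBAC added below, the Create call fails with:
//   services is forbidden: User "system:serviceaccount:default:k8s-controller" ...
func syncDeployment(ctx context.Context, cs *kubernetes.Clientset, dep *appsv1.Deployment) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: dep.Name, Namespace: dep.Namespace},
		Spec: corev1.ServiceSpec{
			Selector: dep.Spec.Template.Labels,
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8000)}},
		},
	}
	_, err := cs.CoreV1().Services(dep.Namespace).Create(ctx, svc, metav1.CreateOptions{})
	return err
}
```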
# Add proper RBAC (ClusterRole and ClusterRoleBinding)

```yaml
# templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "chart.fullname" . }} # k8s-controller
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "update"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch", "create", "update"]

# put the binding in a separate file
---
# templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "chart.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "chart.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "chart.fullname" . }}
  apiGroup: rbac.authorization.k8s.io

# Why this works:
#   1. The ClusterRole grants access to Deployments, Services, and Ingresses.
#   2. The ClusterRoleBinding binds it to the Helm-managed ServiceAccount dynamically.
#   3. The Deployment uses this ServiceAccount automatically via templating.
#   4. The helpers and values.yaml ensure everything can be overridden or extended safely.
```
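After upgrading, you can confirm the grants with `kubectl auth can-i create services --as=system:serviceaccount:default:k8s-controller`, or have the controller check its own permissions at startup via the SelfSubjectAccessReview API. A sketch (the `canCreate` helper is an assumption, not existing project code):

```go
package controller

import (
	"context"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canCreate asks the API server whether the controller's own ServiceAccount
// may create the given resource in a namespace.
// usage: ok, err := canCreate(ctx, cs, "default", "", "services")
func canCreate(ctx context.Context, cs *kubernetes.Clientset, namespace, group, resource string) (bool, error) {
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "create",
				Group:     group,
				Resource:  resource,
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```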

* 💡 Optional: Do you need an Ingress controller installed?

Yes, but only if you want the Ingress resources your controller creates to actually route traffic.
If you're using k3d, you probably already have Traefik installed by default (it shows up in the logs above).
So you don't need to install anything extra unless you're planning to switch to NGINX or another ingress controller.

### Ref:
* Docs for Helm & InClusterConfig:
  - https://jhooq.com/building-first-helm-chart-with-spring-boot/
  - https://jhooq.com/convert-kubernetes-yaml-into-helm/
  - https://heidloff.net/article/accessing-kubernetes-from-go-applications/
  - https://collabnix.github.io/kubelabs/golang/