Merged

Commits (24)
a54b73c
Kubernetes: refactor local cluster deployment
YuryHrytsuk Jul 28, 2025
3b2a5c9
Use newer version
YuryHrytsuk Jul 29, 2025
2b99236
Update install calico link
YuryHrytsuk Jul 29, 2025
eadd4a7
Add default global network policy
YuryHrytsuk Jul 29, 2025
b950879
Merge branch 'main' into local-kubernetes-use-calico
YuryHrytsuk Aug 4, 2025
79b333a
Global deny network policy
YuryHrytsuk Aug 4, 2025
3bcac1e
Merge branch 'local-kubernetes-use-calico' into kubernetes-add-deny-a…
YuryHrytsuk Aug 4, 2025
f4f980d
Report progress on waiting for calico to start
YuryHrytsuk Aug 4, 2025
c1a5ec4
Merge branch 'local-kubernetes-use-calico' into kubernetes-add-deny-a…
YuryHrytsuk Aug 4, 2025
943e9be
Merge remote-tracking branch 'upstream/main' into kubernetes-add-deny…
YuryHrytsuk Aug 4, 2025
cab6786
Update notes for calico configuration helm chart
YuryHrytsuk Aug 5, 2025
0130ae3
Update calico configuration readme
YuryHrytsuk Aug 5, 2025
28815f5
Fix typo
YuryHrytsuk Aug 5, 2025
dbd7fcc
Fix readme
YuryHrytsuk Aug 5, 2025
df6f591
Add missing longhorn ns
YuryHrytsuk Aug 5, 2025
952b976
Fix portainer values
YuryHrytsuk Aug 5, 2025
ea116a7
Allow public dns requests and improve calico config readme
YuryHrytsuk Aug 6, 2025
2c63548
Merge remote-tracking branch 'upstream/main' into kubernetes-add-deny…
YuryHrytsuk Aug 6, 2025
574c5bb
Document how to view network policies
YuryHrytsuk Aug 6, 2025
05adb57
Warn to restart pods to apply network policies
YuryHrytsuk Aug 6, 2025
b64c882
Automatically restart adminer pods on network policy change
YuryHrytsuk Aug 6, 2025
15336ba
Remove comment. It renders in final chart
YuryHrytsuk Aug 6, 2025
6c41bc4
Portainer: document lacking pod annotations and link PR
YuryHrytsuk Aug 6, 2025
4839a75
Document pod annotation checksum trick
YuryHrytsuk Aug 6, 2025
2 changes: 2 additions & 0 deletions charts/.gitignore
@@ -2,3 +2,5 @@ values.yaml
values.*.yaml
k8s_hosts.ini
helmfile.y?ml

*.tgz
20 changes: 20 additions & 0 deletions charts/adminer/templates/networkpolicy.yaml
@@ -0,0 +1,20 @@
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: adminer-network-policy
labels:
{{- include "adminer.labels" . | nindent 4 }}
spec:
selector: app.kubernetes.io/instance == "{{ .Release.Name }}"
ingress:
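# allow clients to reach the Adminer web UI port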
- action: Allow
protocol: TCP
destination:
ports:
- {{ .Values.service.port }}
egress:
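# allow Adminer to connect to PostgreSQL (default port 5432)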
- action: Allow
protocol: TCP
destination:
ports:
- 5432
23 changes: 23 additions & 0 deletions charts/calico-configuration/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
24 changes: 24 additions & 0 deletions charts/calico-configuration/Chart.yaml
@@ -0,0 +1,24 @@
apiVersion: v2
name: calico-configuration
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.26.4"
19 changes: 19 additions & 0 deletions charts/calico-configuration/README.md
@@ -0,0 +1,19 @@
## How to add network policy (local deployment)

How to discover the ports / networks used by an application:
* enable and observe traffic via
  - https://docs.tigera.io/calico/3.30/observability/enable-whisker
  - https://docs.tigera.io/calico/3.30/observability/view-flow-logs
* add staged policies to make sure all cases are covered (see the sketch below): https://docs.tigera.io/calico/3.30/network-policy/staged-network-policies
* promote the staged policies to "normal" (enforced) policies
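
A minimal sketch of a staged policy (the name, namespace, selector, and port are illustrative, not taken from this repo):

```yaml
apiVersion: projectcalico.org/v3
kind: StagedNetworkPolicy
metadata:
  name: my-app-staged        # hypothetical name
  namespace: my-namespace    # hypothetical namespace
spec:
  selector: app.kubernetes.io/instance == "my-app"
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports:
          - 8080             # hypothetical application port
```

A staged policy shows up in the flow logs but is not enforced, so you can confirm it matches the observed traffic before promoting it.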

## Debug network policies
* observe traffic and check the `policies` field in the Whisker flow logs
- https://docs.tigera.io/calico/3.30/observability/enable-whisker
- https://docs.tigera.io/calico/3.30/observability/view-flow-logs

Warning: make sure that the Calico version in use supports Whisker (first introduced in v3.30)
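
To list the policies currently defined (assuming the Calico API server is installed so `kubectl` can read `projectcalico.org` resources, as in the local deployment):

```sh
kubectl get networkpolicies.projectcalico.org --all-namespaces
kubectl get globalnetworkpolicies.projectcalico.org
```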

## Known issues

If a network policy is created after a pod, the pod **MUST** be restarted for the policy to take effect. Read more: https://github.com/projectcalico/calico/issues/10753#issuecomment-3140717418
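
One way to automate the restart is the pod annotation checksum trick (used in this repo to restart the adminer pods on network policy changes): hash the rendered policy into a pod annotation so that any policy change triggers a rollout. A minimal sketch for a Deployment template:

```yaml
spec:
  template:
    metadata:
      annotations:
        # any change to the rendered networkpolicy.yaml changes this hash,
        # which rolls the pods
        checksum/networkpolicy: {{ include (print $.Template.BasePath "/networkpolicy.yaml") . | sha256sum }}
```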
3 changes: 3 additions & 0 deletions charts/calico-configuration/templates/NOTES.txt
@@ -0,0 +1,3 @@
This chart configures Calico but does not deploy Calico itself; Calico is deployed during Kubernetes cluster creation.

Note: to make sure network policies are applied correctly, you may need to restart the targeted application pods.
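
For example (assuming the workload is a Deployment; name and namespace are illustrative):

    kubectl rollout restart deployment/adminer --namespace adminer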
30 changes: 30 additions & 0 deletions charts/calico-configuration/templates/globalpolicy.yaml
@@ -0,0 +1,30 @@
# Source: https://docs.tigera.io/calico/3.30/network-policy/get-started/kubernetes-default-deny
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
name: default-global-deny-network-policy
spec:
# "kube-public", "kube-system", "kube-node-lease" -- system namespaces
# "calico-system", "calico-apiserver", "tigera-operator" -- calico namespaces (when installed via scripts [local deployment])
# TODO: remove the remaining namespaces from this list once appropriate network policies exist for them
namespaceSelector:
kubernetes.io/metadata.name not in {"kube-public", "kube-system", "kube-node-lease", "calico-system", "calico-apiserver", "tigera-operator", "simcore", "cert-manager", "reflector", "traefik", "victoria-logs", "csi-s3", "portainer", "topolvm", "local-path-storage", "longhorn"}
types:
- Ingress
- Egress
egress:
# allow all namespaces to communicate with the DNS pods.
# This also applies to pods that have their own network policy,
# so we don't need to define a DNS rule for each pod.
- action: Allow
protocol: UDP
destination:
selector: 'k8s-app == "kube-dns"'
ports:
- 53
- action: Allow
protocol: TCP
destination:
selector: 'k8s-app == "kube-dns"'
ports:
- 53
6 changes: 6 additions & 0 deletions charts/portainer/Chart.lock
@@ -0,0 +1,6 @@
dependencies:
- name: portainer
repository: https://portainer.github.io/k8s/
version: 1.0.54
digest: sha256:bafe4182881aee8c6df3d3c6f8c523a1bd7577bed04942ad3d9b857a5437d96f
generated: "2025-07-29T11:07:15.39037387+02:00"
29 changes: 29 additions & 0 deletions charts/portainer/Chart.yaml
@@ -0,0 +1,29 @@
apiVersion: v2
name: portainer
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.0.54

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 2.21.2

dependencies:
- name: portainer
version: 1.0.54
repository: "https://portainer.github.io/k8s/"
1 change: 1 addition & 0 deletions charts/portainer/templates/NOTES.txt
@@ -0,0 +1 @@
A thin wrapper around the upstream Portainer Helm chart: https://github.com/portainer/k8s
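
Values for the wrapped chart must be nested under the dependency name from Chart.yaml, as the values files in this chart do; a minimal sketch:

    portainer:
      replicaCount: 1
      service:
        type: ClusterIP
        port: 9000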
36 changes: 36 additions & 0 deletions charts/portainer/templates/networkpolicy.yaml
@@ -0,0 +1,36 @@
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: portainer-network-policy
spec:
selector: app.kubernetes.io/instance == "portainer"
types:
- Ingress
- Egress
egress:
- action: Allow
protocol: TCP
# connect to the Kubernetes API server
destination:
ports:
- 6443
nets:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
# allow DNS (53/UDP) to CoreDNS in kube-system
- action: Allow
protocol: UDP
destination:
# `selector: 'k8s-app == "kube-dns"'` does not work here (so the default DNS
# allow in the global policy does not apply); allow DNS manually using a
# namespace selector that does work.
namespaceSelector: kubernetes.io/metadata.name == "kube-system"
ports:
- 53
ingress:
- action: Allow
# allow traffic to the Portainer web UI
protocol: TCP
destination:
ports:
- {{ .Values.servicePort }}
9 changes: 5 additions & 4 deletions charts/portainer/values.ebs-pv.yaml.gotmpl
@@ -1,4 +1,5 @@
persistence:
enabled: true
size: "1Gi" # minimal size for gp3 is 1Gi
storageClass: "{{ .Values.ebsStorageClassName }}"
portainer:
persistence:
enabled: true
size: "1Gi" # minimal size for gp3 is 1Gi
storageClass: "{{ .Values.ebsStorageClassName }}"
9 changes: 5 additions & 4 deletions charts/portainer/values.longhorn-pv.yaml.gotmpl
@@ -1,4 +1,5 @@
persistence:
enabled: true
size: "300Mi" # cannot be lower https://github.com/longhorn/longhorn/issues/8488
storageClass: "{{ .Values.longhornStorageClassName }}"
portainer:
persistence:
enabled: true
size: "300Mi" # cannot be lower https://github.com/longhorn/longhorn/issues/8488
storageClass: "{{ .Values.longhornStorageClassName }}"
9 changes: 5 additions & 4 deletions charts/portainer/values.s3-pv.yaml.gotmpl
@@ -1,4 +1,5 @@
persistence:
enabled: true
size: "1Gi"
storageClass: "csi-s3"
portainer:
persistence:
enabled: true
size: "1Gi"
storageClass: "csi-s3"
113 changes: 56 additions & 57 deletions charts/portainer/values.yaml.gotmpl
@@ -1,69 +1,68 @@
# Default values for portainer.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
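# YAML anchor: the port is defined once here and reused below via *servicePort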
servicePort: &servicePort 9000

replicaCount: 1
portainer:
replicaCount: 1

image:
repository: portainer/portainer-ce
pullPolicy: IfNotPresent
image:
repository: portainer/portainer-ce
pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: portainer-sa-clusteradmin
serviceAccount:
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: portainer-sa-clusteradmin

persistence: {}
persistence: {}

podAnnotations: {}
podLabels: {}
podAnnotations: {}
podLabels: {}

podSecurityContext:
{}
podSecurityContext:
{}

securityContext:
{}
securityContext:
{}

service:
type: "ClusterIP"
port: 9000
service:
type: "ClusterIP"
port: *servicePort

ingress:
enabled: true
className: ""
annotations:
namespace: {{ .Release.Namespace }}
cert-manager.io/cluster-issuer: "cert-issuer"
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-traefik-basic-auth@kubernetescrd,traefik-portainer-strip-prefix@kubernetescrd # namespace + middleware name
tls:
- hosts:
- {{ requiredEnv "K8S_MONITORING_FQDN" }}
secretName: monitoring-tls
hosts:
- host: {{ requiredEnv "K8S_MONITORING_FQDN" }}
paths:
- path: /portainer
pathType: Prefix
backend:
service:
name: portainer
port:
number: 9000
ingress:
enabled: true
className: ""
annotations:
namespace: {{ .Release.Namespace }}
cert-manager.io/cluster-issuer: "cert-issuer"
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-traefik-basic-auth@kubernetescrd,traefik-portainer-strip-prefix@kubernetescrd # namespace + middleware name
tls:
- hosts:
- {{ requiredEnv "K8S_MONITORING_FQDN" }}
secretName: monitoring-tls
hosts:
- host: {{ requiredEnv "K8S_MONITORING_FQDN" }}
paths:
- path: /portainer
pathType: Prefix
backend:
service:
name: portainer
port:
number: *servicePort

resources:
limits:
cpu: 2
memory: 1024Mi
requests:
cpu: 0.1
memory: 128Mi
resources:
limits:
cpu: 2
memory: 1024Mi
requests:
cpu: 0.1
memory: 128Mi

nodeSelector:
ops: "true"
nodeSelector:
ops: "true"