This guide explains what your Kubernetes cluster can do and provides practical examples of deploying applications.
Your k3s cluster consists of:
- Control Plane (192.168.68.54): The "brain" that manages the cluster, schedules workloads, and maintains desired state
- Worker Nodes: The "muscles" that actually run your applications (containers/pods)
The control plane decides which worker runs which application. You tell the control plane what you want, and it makes it happen.
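You can see this split for yourself with a quick check, assuming you are already on the control plane (node names will differ in your cluster):

# List every node and its role
sudo k3s kubectl get nodes -o wide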
- Pod: The smallest unit - one or more containers running together
- Deployment: Manages multiple identical pods (replicas) for high availability
- Service: Provides a stable network endpoint to access pods
- Namespace: Logical isolation for organizing resources
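To make the first of these concrete, here is a minimal sketch of a standalone Pod (the names and image are placeholders; in practice you will almost always create Deployments instead, as the examples below do):

# A single Pod - the smallest deployable unit
cat <<EOF | sudo k3s kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: nginx
EOF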
All commands should be run from the control plane:
ssh core@192.168.68.54

Deploy nginx across your workers:
# Create a deployment with 3 replicas
sudo k3s kubectl create deployment web --image=nginx --replicas=3
# Check which workers are running the pods
sudo k3s kubectl get pods -o wide
# Expected output (columns abridged):
# NAME           READY   STATUS    NODE
# web-xxxx-aaa   1/1     Running   k3s-worker-1
# web-xxxx-bbb   1/1     Running   k3s-worker-2
# web-xxxx-ccc   1/1     Running   k3s-worker-1
# Expose the deployment
sudo k3s kubectl expose deployment web --port=80 --type=NodePort
# Get the assigned port
sudo k3s kubectl get service web
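If you want to script against the port instead of reading it from the table, a one-liner like this should work:

# Print just the assigned NodePort
sudo k3s kubectl get service web -o jsonpath='{.spec.ports[0].nodePort}'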
# Access it: http://192.168.68.54:<NodePort>

Run PostgreSQL on a worker:
# Create a PostgreSQL deployment
sudo k3s kubectl create deployment postgres --image=postgres:15

# Set the superuser password (the postgres image won't start without one)
sudo k3s kubectl set env deployment/postgres POSTGRES_PASSWORD=mypassword
# Check it's running
sudo k3s kubectl get pods -l app=postgres -o wide
# Access the database
POD_NAME=$(sudo k3s kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}')
sudo k3s kubectl exec -it $POD_NAME -- psql -U postgres
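You can also run a one-off query without opening an interactive session, which is handy for quick health checks:

# Run a single SQL statement and exit
sudo k3s kubectl exec -it $POD_NAME -- psql -U postgres -c 'SELECT version();'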
Deploy a complete web application with frontend, backend, and database:

# Deploy PostgreSQL database
sudo k3s kubectl create deployment db --image=postgres:15
sudo k3s kubectl set env deployment/db POSTGRES_PASSWORD=mypassword
# Deploy a backend API (example: a simple API)
sudo k3s kubectl create deployment api \
--image=traefik/whoami \
--replicas=2
# Expose the API internally
sudo k3s kubectl expose deployment api --port=80 --name=api-service
# Deploy a frontend
sudo k3s kubectl create deployment frontend \
--image=nginx \
--replicas=2
# Expose frontend to outside world
sudo k3s kubectl expose deployment frontend \
--port=80 \
--type=NodePort \
--name=frontend-service
# See the entire stack
sudo k3s kubectl get all
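Inside the cluster, Services are reachable by name through cluster DNS. To verify that the frontend could reach the API, you can spin up a throwaway pod (using the public curlimages/curl image as an example) and hit api-service by name:

# Launch a temporary pod, curl the API service, then clean up
sudo k3s kubectl run curl-test -i --rm --restart=Never \
  --image=curlimages/curl -- http://api-service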
Deploy common homelab applications:

# Create a namespace for network services
sudo k3s kubectl create namespace network
# Deploy Pi-hole
cat <<EOF | sudo k3s kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  namespace: network
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
      - name: pihole
        image: pihole/pihole:latest
        env:
        - name: TZ
          value: "America/New_York"
        - name: WEBPASSWORD
          value: "admin"
        ports:
        - containerPort: 80
        - containerPort: 53
          protocol: TCP
        - containerPort: 53
          protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: network
spec:
  type: NodePort
  selector:
    app: pihole
  ports:
  - name: web
    port: 80
    targetPort: 80
  - name: dns
    port: 53
    targetPort: 53
  - name: dns-udp # most DNS clients use UDP, which needs its own entry
    port: 53
    protocol: UDP
    targetPort: 53
EOF
# Get the web interface port
sudo k3s kubectl get service pihole -n network
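Since the service exposes several named ports, you can select the web one directly with a jsonpath filter if you are scripting:

# Print only the web interface's NodePort
sudo k3s kubectl get service pihole -n network \
  -o jsonpath='{.spec.ports[?(@.name=="web")].nodePort}'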
# Access: http://192.168.68.54:<NodePort>

See how Kubernetes distributes load across workers:
# Deploy 5 replicas
sudo k3s kubectl create deployment loadtest \
--image=traefik/whoami \
--replicas=5
# Expose it
sudo k3s kubectl expose deployment loadtest \
--port=80 \
--type=NodePort
# Check distribution across workers
sudo k3s kubectl get pods -l app=loadtest -o wide
# You'll see pods spread across your workers
# Worker-1: 3 pods
# Worker-2: 2 pods
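To watch the load balancing in action, hit the service repeatedly; whoami reports the hostname of the pod that answered, so the names should vary between requests (this assumes curl is available where you run it):

# Look up the NodePort, then send a handful of requests
PORT=$(sudo k3s kubectl get service loadtest -o jsonpath='{.spec.ports[0].nodePort}')
for i in 1 2 3 4 5; do curl -s http://192.168.68.54:$PORT | grep Hostname; done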
Demonstrate automatic pod recovery:

# Deploy with 3 replicas
sudo k3s kubectl create deployment ha-demo \
--image=nginx \
--replicas=3
# Watch the pods
sudo k3s kubectl get pods -l app=ha-demo -o wide -w
# In another terminal, delete a pod
POD_NAME=$(sudo k3s kubectl get pods -l app=ha-demo -o jsonpath='{.items[0].metadata.name}')
sudo k3s kubectl delete pod $POD_NAME
# Watch as Kubernetes automatically creates a replacement pod!
# This is the "self-healing" feature of Kubernetes

Scale a deployment up or down on demand:

# Scale up to 5 replicas
sudo k3s kubectl scale deployment web --replicas=5
# Scale down to 2 replicas
sudo k3s kubectl scale deployment web --replicas=2

Update a running deployment to a new image:

# Update to a new image version
sudo k3s kubectl set image deployment/web nginx=nginx:alpine
# Kubernetes performs a rolling update - old pods are replaced one at a time
# Zero downtime!
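Two companion commands are worth knowing here: one to watch the rollout, and one to back out if the new image misbehaves:

# Watch the rolling update until it completes
sudo k3s kubectl rollout status deployment/web

# Roll back to the previous image if something breaks
sudo k3s kubectl rollout undo deployment/web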
View logs from your applications:

# Get logs from a specific pod
sudo k3s kubectl logs <pod-name>
# Follow logs in real-time
sudo k3s kubectl logs -f <pod-name>
# Get logs from all pods in a deployment
sudo k3s kubectl logs -l app=web --tail=20

Inspect and debug pods:

# Describe a pod to see events and status
sudo k3s kubectl describe pod <pod-name>
# Execute commands inside a pod
sudo k3s kubectl exec -it <pod-name> -- /bin/bash
# Check resource usage
sudo k3s kubectl top nodes
sudo k3s kubectl top pods

Clean up when you're done:

# Delete a deployment (and its pods)
sudo k3s kubectl delete deployment web
# Delete a service
sudo k3s kubectl delete service web
# Delete everything in a namespace
sudo k3s kubectl delete all --all -n network

Some real-world ways to put the cluster to work:

Multi-environment setup:
- Scenario: Run dev, staging, and production environments on separate workers (see the sketch after this list)
- Workers: 3+ workers, each hosting a full application stack
- Benefits: Isolated environments, easy to reset, version control your configs
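A minimal sketch of that separation, using namespaces plus a node label (the namespace and node names here are just examples):

# One namespace per environment
for ns in dev staging production; do
  sudo k3s kubectl create namespace $ns
done

# Label a worker so dev workloads can target it with a nodeSelector
sudo k3s kubectl label node k3s-worker-1 env=dev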

Media server:
- Services: Plex/Jellyfin (media), Sonarr/Radarr (management), Transmission (downloads)
- Workers: Dedicate one worker with lots of storage for media
- Benefits: All services managed by Kubernetes, automatic restarts

Self-hosted services:
- Services: Nextcloud (files), Gitea (git), Bitwarden (passwords), Uptime Kuma (monitoring)
- Workers: Spread across workers for redundancy
- Benefits: One cluster manages all your self-hosted services

CI/CD pipeline:
- Services: GitLab or Jenkins runners on workers
- Workers: Dedicated build workers with lots of CPU
- Benefits: Parallel builds, automatic scaling during high load

Learning platform:
- Scenario: Learn Docker, Kubernetes, DevOps practices
- Workers: Experiment without fear - tear down and rebuild easily
- Benefits: Safe sandbox, matches production Kubernetes patterns

Why a multi-worker cluster helps:
- High Availability: If one worker fails, its pods are rescheduled onto healthy workers
- Load Distribution: Spread CPU/memory intensive apps across multiple machines
- Resource Isolation: Keep incompatible or resource-hungry apps on separate workers
- Scaling: Add more workers to handle more applications
- Rolling Updates: Update workers one at a time without downtime

Next steps:
- Learn kubectl: run sudo k3s kubectl --help to explore the available commands
- Explore Helm: Package manager for Kubernetes applications
- Try Persistent Storage: Learn about PersistentVolumes for databases (see the sketch after this list)
- Set up Ingress: Use a reverse proxy (like Traefik, which k3s bundles by default, or nginx-ingress) for routing
- Monitor Your Cluster: Deploy Prometheus + Grafana for metrics
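As a starting point for persistent storage: k3s bundles the local-path provisioner, so a claim like the sketch below (the name and size are placeholders) gets backed by a directory on whichever worker the pod lands on:

# A minimal PersistentVolumeClaim using k3s's built-in storage class
cat <<EOF | sudo k3s kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
EOF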