
Huntarr on Raspberry Pi k3s cluster #727

@HarishHary


Hello,

I am running the app in my k3s cluster. It works, but it is really slow and almost completely unresponsive. I'm wondering why this is the case?

deployment.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: huntarr
  namespace: mediarr
spec:
  revisionHistoryLimit: 3
  replicas: 1
  selector:
    matchLabels:
      app: huntarr
  template:
    metadata:
      labels:
        app: huntarr
    spec:
      containers:
        - name: huntarr
          image: "huntarr/huntarr:latest"
          imagePullPolicy: IfNotPresent
          env:
            - name: TZ
              value: UTC
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
            - name: HUNTARR_PORT
              value: "9705"
          ports:
            - name: http
              containerPort: 9705
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: 9705
          #   failureThreshold: 5
          #   initialDelaySeconds: 60
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 10
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: 9705
          #   initialDelaySeconds: 30
          #   failureThreshold: 3
          #   timeoutSeconds: 1
          #   periodSeconds: 10
          # startupProbe:
          #   httpGet:
          #     path: /
          #     port: 9705
          #   initialDelaySeconds: 30
          #   failureThreshold: 30
          #   timeoutSeconds: 1
          #   periodSeconds: 5
          volumeMounts:
            - name: nfs-share
              mountPath: /config
              subPath: config/huntarr
      volumes:
        - name: nfs-share
          persistentVolumeClaim:
            claimName: nfs-share
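
For context, the container currently runs with no resource requests or limits set. This is not part of my manifest; just a sketch of what I could add under the huntarr container on the Pi nodes (the values are untested placeholders, not a recommendation):

# Would sit inside spec.template.spec.containers[0] of the Deployment above.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 512Mi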

httproute.yml

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: huntarr
  namespace: mediarr
spec:
  parentRefs:
    - name: traefik-gateway
      namespace: traefik
  hostnames:
    - huntarr.hsegar.com
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: /
      backendRefs:
        - name: huntarr
          port: 9705

service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: huntarr
  namespace: mediarr
spec:
  selector:
    app: huntarr
  type: ClusterIP
  ports:
    - port: 9705
      targetPort: 9705
      protocol: TCP
      name: http

My config directory is served over NFS. Each node has the following entry in /etc/fstab:

192.168.1.120:/mnt/share /mnt/share nfs defaults,nofail,noatime,nolock 0 0
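
As far as I understand (please correct me if this is wrong), the pod does not actually go through this fstab mount: the PersistentVolume below mounts the export directly via its nfs source, so mount options for the in-pod mount would have to be set on the PV itself. A sketch, carrying over the same options from my fstab line:

# Sketch only: same PV as in the file below, with mountOptions added.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  storageClassName: manual
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - noatime
    - nolock
  nfs:
    server: 192.168.1.120
    path: /mnt/share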

persistent-volume.yaml

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.120
    path: /mnt/share
  # hostPath:
  #   path: "/mnt/share"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-share
  namespace: mediarr
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "500Gi"

Regards
