---
title: Adapt your Inotify parameters for your deployments
excerpt: 'Adapt the Inotify parameters of deployments that need specific Inotify settings'
updated: 2025-01-09
---

## Objective

The OVHcloud Managed Kubernetes service gives you access to Kubernetes clusters without the hassle of installing or operating them.
For some specific use cases, however, you may want to customize node parameters.

This guide covers the modification of the sysctl parameter that controls Inotify, allowing your applications to watch more files simultaneously.

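On any Linux host (a node or a pod), you can read the current limit from `/proc` before changing it. For example:

```shell
# Read the current inotify watch limit of this Linux host
cat /proc/sys/fs/inotify/max_user_watches
```

Workloads that create many watches (file watchers, IDEs, log tailers) typically fail with "No space left on device" errors when this limit is reached.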
## Requirements

- A [Public Cloud project](https://www.ovhcloud.com/fr/public-cloud/) in your OVHcloud account
- Access to the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.com/fr/&ovhSubsidiary=fr)
- An OVHcloud Managed Kubernetes cluster
- Access to your OVHcloud Managed Kubernetes cluster through the Kubeconfig file
- The [`kubectl`](https://kubernetes.io/docs/reference/kubectl/overview/){.external} command-line tool installed
- Your Kubeconfig exported into your terminal, following this guide: [Configuring Kubectl](/pages/public_cloud/containers_orchestration/managed_kubernetes/configuring-kubectl-on-an-ovh-managed-kubernetes-cluster)

## Instructions

### Step 1: Create a privileged DaemonSet and define the value to modify

Create a YAML file named `sysctl-tuner-daemonset.yaml` with the content below:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-tuner
spec:
  selector:
    matchLabels:
      name: sysctl-tuner
  template:
    metadata:
      labels:
        name: sysctl-tuner
    spec:
      containers:
        - name: sysctl
          image: busybox
          securityContext:
            privileged: true
          command:
            - sh
            - -c
            - "sysctl -w fs.inotify.max_user_watches=<value> && while true; do sleep 3600; done" # Define the value you need; the loop keeps the container running so it is not endlessly restarted
      hostNetwork: true
      hostPID: true
      tolerations:
        - operator: "Exists" # Allow running on all nodes, including tainted ones
```

Define the value of the sysctl key `fs.inotify.max_user_watches` based on your applications' needs.

The DaemonSet sets the sysctl parameter `fs.inotify.max_user_watches` to the value you provided on every node of your Kubernetes cluster, including nodes added later.

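Once the value is set, apply the manifest with `kubectl`; the commands below assume you kept the file name `sysctl-tuner-daemonset.yaml` used above:

```shell
# Deploy the DaemonSet and wait until it is running on every node
kubectl apply -f sysctl-tuner-daemonset.yaml
kubectl rollout status daemonset/sysctl-tuner
```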
### Step 2: Test if the changes were applied correctly

To check that the Inotify value has been changed properly, you can read it from inside any pod:

```bash
kubectl exec -it <pod> -- cat /proc/sys/fs/inotify/max_user_watches
```

The output should match the value you defined in the DaemonSet YAML above.
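To verify the value on every node at once, you can loop over the pods of the DaemonSet (the `name=sysctl-tuner` label comes from the manifest in Step 1):

```shell
# Read the value through each sysctl-tuner pod (one per node)
kubectl get pods -l name=sysctl-tuner -o name \
  | xargs -I{} kubectl exec {} -- cat /proc/sys/fs/inotify/max_user_watches
```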

Congratulations! You have successfully modified the Inotify `max_user_watches` value on your Managed Kubernetes Service cluster.

## Go further

To deploy your first application on your Kubernetes cluster, we suggest you refer to our guide to [Deploying an application](/pages/public_cloud/containers_orchestration/managed_kubernetes/deploying-an-application).

- If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](https://www.ovhcloud.com/fr/professional-services/) to get a quote and have our Professional Services experts assist you with your specific use case.

- Join our [community of users](https://community.ovh.com/en/).