diff --git a/changelog/february2025/2025-02-25-kubernetes-added-data-plane-logs-in-cockpit.mdx b/changelog/february2025/2025-02-25-kubernetes-added-data-plane-logs-in-cockpit.mdx index b6a0df6c44..42580209b1 100644 --- a/changelog/february2025/2025-02-25-kubernetes-added-data-plane-logs-in-cockpit.mdx +++ b/changelog/february2025/2025-02-25-kubernetes-added-data-plane-logs-in-cockpit.mdx @@ -9,6 +9,6 @@ category: containers product: kubernetes --- -**Centralized monitoring is now available**, allowing you to send Kubernetes container logs to Cockpit for streamlined monitoring. Setup is easy with **one-click deployment** via Easy Deploy using Promtail. This feature captures **all container logs**, including pod stdout/stderr and systemd journal. Additionally, you can control ingestion costs with **customizable filtering options**. +**Centralized monitoring is now available**, allowing you to send Kubernetes container logs to Cockpit for streamlined monitoring. Setup is easy with **one-click deployment** via Easy Deploy using Promtail. This feature captures **all container logs**, including Pod stdout/stderr and systemd journal. Additionally, you can control ingestion costs with **customizable filtering options**. Learn more in our dedicated documentation: [Monitor Data Plane with Cockpit](https://www.scaleway.com/en/docs/kubernetes/how-to/monitor-data-plane-with-cockpit/) diff --git a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx index c1f6da2faa..cb85c582e8 100644 --- a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx +++ b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx @@ -54,7 +54,7 @@ Data source managed alert rules allow you to configure alerts managed by the dat ## Define your metric and alert conditions -Switch between the tabs below to create alerts for a Scaleway Instance, an Object Storage bucket, a Kubernetes cluster pod, or Cockpit logs. +Switch between the tabs below to create alerts for a Scaleway Instance, an Object Storage bucket, a Kubernetes cluster Pod, or Cockpit logs. @@ -105,15 +105,15 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert. 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points). - - The steps below explain how to create the metric selection and configure an alert condition that triggers when **no new pod activity occurs, which could mean your cluster is stuck or unresponsive.** + + The steps below explain how to create the metric selection and configure an alert condition that triggers when **no new Pod activity occurs, which could mean your cluster is stuck or unresponsive.** 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource. ```bash rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0 ``` - The `kubernetes_cluster_k8s_shoot_nodes_pods_usage_total` metric represents the total number of pods currently running across all nodes in your Kubernetes cluster. 
It is helpful to monitor current pod consumption per node pool or cluster, and help track resource saturation or unexpected workload spikes.
+ The `kubernetes_cluster_k8s_shoot_nodes_pods_usage_total` metric represents the total number of Pods currently running across all nodes in your Kubernetes cluster. It helps you monitor current Pod consumption per node pool or cluster, and track resource saturation or unexpected workload spikes.
2. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
diff --git a/pages/cockpit/how-to/send-logs-from-k8s-to-cockpit.mdx b/pages/cockpit/how-to/send-logs-from-k8s-to-cockpit.mdx
index 4f1ae0ed13..5aa55646b3 100644
--- a/pages/cockpit/how-to/send-logs-from-k8s-to-cockpit.mdx
+++ b/pages/cockpit/how-to/send-logs-from-k8s-to-cockpit.mdx
@@ -1,6 +1,6 @@
---
title: How to send logs from your Kubernetes cluster to your Cockpit
-description: Learn how to send your pod logs to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pods logs to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and log analysis in your infrastructure.
+description: Learn how to send your Pod logs to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes Pod logs to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and log analysis in your infrastructure.
tags: kubernetes cockpit logs observability monitoring cluster
dates:
  validation: 2025-08-20
@@ -93,7 +93,7 @@ Once you have configured your `values.yml` file, you can use Helm to deploy the
The `-f` flag specifies the path to your `values.yml` file, which contains the configuration for the Helm chart.
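For reference, a typical invocation could look like the following sketch (the release name and the `monitoring` namespace are placeholders; adapt them to your setup):

```bash
# Deploy the chart using the values.yml file prepared earlier (-f)
helm upgrade --install k8s-monitoring grafana/k8s-monitoring \
  -f values.yml \
  --namespace monitoring --create-namespace
```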
Helm installs the `k8s-monitoring` chart, which includes the Alloy DaemonSet configured to collect logs from your Kubernetes cluster.
- The DaemonSet ensures that a pod is running on each node in your cluster, which collects logs and forwards them to the specified Loki endpoint in your Cockpit.
+ The DaemonSet ensures that a Pod runs on each node in your cluster, collecting logs and forwarding them to the specified Loki endpoint in your Cockpit.
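To confirm that a collector Pod is scheduled on every node, you can list the DaemonSet and its Pods (a sketch; replace `monitoring` with the namespace you installed the release into):

```bash
# One DESIRED/READY entry per node indicates full coverage
kubectl get daemonset,pods -n monitoring -o wide
```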
3. Optionally, run the following command to check the status of the release and ensure it was installed:
diff --git a/pages/cockpit/how-to/send-metrics-from-k8s-to-cockpit.mdx b/pages/cockpit/how-to/send-metrics-from-k8s-to-cockpit.mdx
index b42410402d..e951958299 100644
--- a/pages/cockpit/how-to/send-metrics-from-k8s-to-cockpit.mdx
+++ b/pages/cockpit/how-to/send-metrics-from-k8s-to-cockpit.mdx
@@ -1,6 +1,6 @@
---
title: How to send metrics from your Kubernetes cluster to your Cockpit
-description: Learn how to send your pod metrics to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pods metrics to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and metrics analysis in your infrastructure.
+description: Learn how to send your Pod metrics to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes Pod metrics to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and metrics analysis in your infrastructure.
tags: kubernetes cockpit metrics observability monitoring cluster
dates:
  validation: 2025-08-20
@@ -70,7 +70,7 @@ alloy-singleton:

## Add annotations for auto-discovery

-Annotations in Kubernetes provide a way to attach metadata to your resources. For `k8s-monitoring`, these annotations signal which pods should be scraped for metrics, and what port to use. In this documentation we are adding annotations to specify we want `k8s-monitoring` to scrape the pods from our deployment. Make sure that you replace `$METRICS_PORT` with the port where your application exposes Prometheus metrics.
+Annotations in Kubernetes provide a way to attach metadata to your resources. For `k8s-monitoring`, these annotations signal which Pods should be scraped for metrics, and what port to use. In this guide, we add annotations to specify that `k8s-monitoring` should scrape the Pods from our deployment. Make sure that you replace `$METRICS_PORT` with the port where your application exposes Prometheus metrics.

### Kubernetes deployment template

@@ -153,7 +153,7 @@ Once you have configured your `values.yml` file, you can use Helm to deploy the
The `-f` flag specifies the path to your `values.yml` file, which contains the configuration for the Helm chart.
Helm installs the `k8s-monitoring` chart, which includes the Alloy DaemonSet configured to collect metrics from your Kubernetes cluster.
- The DaemonSet ensures that a pod is running on each node in your cluster, which collects metrics and forwards them to the specified Prometheus endpoint in your Cockpit.
+ The DaemonSet ensures that a Pod runs on each node in your cluster, collecting metrics and forwarding them to the specified Prometheus endpoint in your Cockpit.
3. Optionally, check the status of the release to ensure it was installed:
diff --git a/pages/data-lab/concepts.mdx b/pages/data-lab/concepts.mdx
index 068b3588cb..6e8d9269b7 100644
--- a/pages/data-lab/concepts.mdx
+++ b/pages/data-lab/concepts.mdx
@@ -8,7 +8,7 @@ dates:

## Apache Spark cluster

-An Apache Spark cluster is an orchestrated set of machines over which distributed/Big data calculus is processed. In the case of Scaleway Data Lab, the Apache Spark cluster is a Kubernetes cluster, with Apache Spark installed in each pod. For more details, check out the [Apache Spark documentation](https://spark.apache.org/documentation.html).
+An Apache Spark cluster is an orchestrated set of machines over which distributed/Big Data computations are processed. In the case of Scaleway Data Lab, the Apache Spark cluster is a Kubernetes cluster, with Apache Spark installed in each Pod. For more details, check out the [Apache Spark documentation](https://spark.apache.org/documentation.html).

## Data Lab

@@ -40,7 +40,7 @@ A notebook for an Apache Spark cluster is an interactive, web-based tool that al

## Persistent volume

-A Persistent Volume (PV) is a cluster-wide storage resource that ensures data persistence beyond the lifecycle of individual pods. Persistent volumes abstract the underlying storage details, allowing administrators to use various storage solutions.
+A Persistent Volume (PV) is a cluster-wide storage resource that ensures data persistence beyond the lifecycle of individual Pods. Persistent volumes abstract the underlying storage details, allowing administrators to use various storage solutions.

Apache Spark® executors require storage space for various operations, particularly to shuffle data during wide operations such as sorting, grouping, and aggregation. Wide operations are transformations that require data from different partitions to be combined, often resulting in data movement across the cluster. During the map phase, executors write data to shuffle storage, which is then read by reducers.

diff --git a/pages/gpu/how-to/use-mig-with-kubernetes.mdx b/pages/gpu/how-to/use-mig-with-kubernetes.mdx
index eec6d316a4..bafcf8e5ad 100644
--- a/pages/gpu/how-to/use-mig-with-kubernetes.mdx
+++ b/pages/gpu/how-to/use-mig-with-kubernetes.mdx
@@ -32,7 +32,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete

## Configure MIG partitions inside a Kubernetes cluster

-1. Find the name of the pods running the Nvidia Driver:
+1. Find the name of the Pods running the NVIDIA driver:
    ```
    % kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
@@ -163,7 +163,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete

## Deploy containers that use NVIDIA MIG technology partitions

-1. Write a deployment file to deploy 8 pods executing NVIDIA SMI.
+1. Write a deployment file to deploy 8 Pods executing NVIDIA SMI.
    Open a text editor of your choice and create a deployment file `deploy-mig.yaml`, then paste the following content into the file, save it, and exit the editor:
    ```yaml
    apiVersion: v1
@@ -321,7 +321,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
    nvidia.com/gpu.product : NVIDIA-H100-PCIe-MIG-1g.10gb
    ```

-2. Deploy the pods:
+2. Deploy the Pods:
    ```
    % kubectl create -f deploy-mig.yaml
    pod/test-1 created
@@ -334,7 +334,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
    pod/test-8 created
    ```

-3. Display the logs of the pods.
The pods print their UUID with the `nvidia-smi` command: +3. Display the logs of the Pods. The Pods print their UUID with the `nvidia-smi` command: ``` % kubectl get -f deploy-mig.yaml -o name | xargs -I{} kubectl logs {} GPU 0: NVIDIA H100 PCIe (UUID: GPU-717ef73c-2d43-4fdc-76d2-1cddef4863bb) @@ -354,7 +354,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete GPU 0: NVIDIA H100 PCIe (UUID: GPU-717ef73c-2d43-4fdc-76d2-1cddef4863bb) MIG 1g.10gb Device 0: (UUID: MIG-fdfd2afa-5cbd-5d1d-b1ae-6f0e13cc0ff8) ``` - As you can see, seven pods have been executed on different MIG partitions, while the eighth pod had to wait for one of the seven MIG partitions to become available to be executed. + As you can see, seven Pods have been executed on different MIG partitions, while the eighth Pod had to wait for one of the seven MIG partitions to become available to be executed. 4. Clean the deployment: ``` @@ -377,7 +377,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete node/scw-k8s-jovial-dubinsky-pool-h100-93a072191d38 labeled ``` -2. Check the status of NVIDIA SMI in the driver pod: +2. Check the status of NVIDIA SMI in the driver Pod: ``` % kubectl exec nvidia-driver-daemonset-8t89m -t -n kube-system -- nvidia-smi -L GPU 0: NVIDIA H100 PCIe (UUID: GPU-717ef73c-2d43-4fdc-76d2-1cddef4863bb) diff --git a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx index 4ff3cdd65f..e173a1695e 100644 --- a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx +++ b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx @@ -30,7 +30,7 @@ Below, you will find a guide to help you make an informed decision: * Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L4OS](https://www.scaleway.com/en/contact-l40s/) Instances. * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) * A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks - * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods. + * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S Pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S Pods. * **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations. Remember that there is no one-size-fits-all answer, and the right GPU Instance type will depend on your workload’s unique requirements and budget. It is important that you regularly reassess your choice as your workload evolves. Depending on which type best fits your evolving tasks, you can easily migrate from one GPU Instance type to another. 
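For illustration, a Pod can request a single MIG partition through its resource limits. The sketch below assumes the NVIDIA device plugin exposes MIG devices using the `mixed` strategy, so that profile-specific resource names such as `nvidia.com/mig-1g.10gb` are available; adapt the image and profile to your own partition layout:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-worker
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-task
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder CUDA image
      command: ["nvidia-smi", "-L"]                # prints the MIG device visible to this container
      resources:
        limits:
          nvidia.com/mig-1g.10gb: 1                # request a single 1g.10gb MIG partition
```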
diff --git a/pages/gpu/reference-content/kubernetes-gpu-time-slicing.mdx b/pages/gpu/reference-content/kubernetes-gpu-time-slicing.mdx index d9833f712a..7ebd75c224 100644 --- a/pages/gpu/reference-content/kubernetes-gpu-time-slicing.mdx +++ b/pages/gpu/reference-content/kubernetes-gpu-time-slicing.mdx @@ -1,6 +1,6 @@ --- title: NVIDIA GPU time-slicing with Kubernetes -description: Learn how NVIDIA GPU time-slicing with Kubernetes enables efficient GPU resource sharing among containers or pods. Explore operational procedures, management, and comparisons with MIG technology. +description: Learn how NVIDIA GPU time-slicing with Kubernetes enables efficient GPU resource sharing among containers or Pods. Explore operational procedures, management, and comparisons with MIG technology. tags: gpu nvidia dates: validation: 2025-05-26 @@ -9,35 +9,35 @@ dates: NVIDIA GPUs are powerful hardware commonly used for model training, deep learning, scientific simulations, and data processing tasks. On the other hand, Kubernetes (K8s) is a container orchestration platform that helps manage and deploy containerized applications. -Time-slicing in the context of NVIDIA GPUs and Kubernetes refers to sharing a physical GPU among multiple containers or pods in a Kubernetes cluster. +Time-slicing in the context of NVIDIA GPUs and Kubernetes refers to sharing a physical GPU among multiple containers or Pods in a Kubernetes cluster. -The technology involves partitioning the GPU's processing time into smaller intervals and allocating those intervals to different containers or pods. This technique allows multiple workloads to run on the same physical GPU, effectively sharing its resources while providing isolation between the different workloads. +The technology involves partitioning the GPU's processing time into smaller intervals and allocating those intervals to different containers or Pods. This technique allows multiple workloads to run on the same physical GPU, effectively sharing its resources while providing isolation between the different workloads. ## Operational procedures of GPU time-slicing in Kubernetes Time-slicing NVIDIA GPUs with Kubernetes involves: -* Dynamically allocating and sharing GPU resources among multiple containers or pods in a cluster. -* Allowing each pod or container to use the GPU for a specific time interval before switching to another. +* Dynamically allocating and sharing GPU resources among multiple containers or Pods in a cluster. +* Allowing each Pod or container to use the GPU for a specific time interval before switching to another. * Efficiently using the available GPU capacity. This allows multiple workloads to use the GPU by taking turns in rapid succession. -* **GPU sharing:** Time-slicing involves sharing a single GPU among containers or pods by allocating small time intervals. Sharing is achieved by rapidly switching between different containers or pods, allowing them to use the GPU for a short duration before moving on to the next workload. +* **GPU sharing:** Time-slicing involves sharing a single GPU among containers or Pods by allocating small time intervals. Sharing is achieved by rapidly switching between different containers or Pods, allowing them to use the GPU for a short duration before moving on to the next workload. * **GPU context switching:** Refers to saving one workload's state, loading another's, and resuming processing. Modern GPUs are designed to handle context switching efficiently. 
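To make the mechanism concrete, here is a sketch of how time-slicing is commonly declared when the NVIDIA device plugin is managed by the GPU Operator (the ConfigMap name, namespace, and replica count are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: gpu-operator
data:
  any: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4   # each physical GPU appears as 4 schedulable nvidia.com/gpu resources
```

With this configuration, the device plugin advertises `nvidia.com/gpu: 4` on each matching node, and the scheduler can place up to four Pods per physical GPU.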
## Management of GPU time-slicing within the Kubernetes cluster Several elements within the Kubernetes cluster oversee the time-slicing of GPUs: -* **GPU scheduling:** Kubernetes employs a scheduler that determines which containers or pods get access to GPUs and when. This scheduling is based on resource requests, limits, and the available GPUs on the nodes in the cluster. +* **GPU scheduling:** Kubernetes employs a scheduler that determines which containers or Pods get access to GPUs and when. This scheduling is based on resource requests, limits, and the available GPUs on the nodes in the cluster. * **GPU device plugin:** Kubernetes uses the NVIDIA GPU device plugin to expose the GPUs available on each node to the cluster's scheduler. This plugin helps the scheduler make informed decisions about GPU allocation. -* **Container GPU requests and limits:** When defining a container or pod in Kubernetes, you can specify GPU requests and limits. The requests represent the minimum required GPU resources, while the limits define the maximum allowed GPU usage. These values guide the Kubernetes scheduler in making placement decisions. +* **Container GPU requests and limits:** When defining a container or Pod in Kubernetes, you can specify GPU requests and limits. The requests represent the minimum required GPU resources, while the limits define the maximum allowed GPU usage. These values guide the Kubernetes scheduler in making placement decisions. ## Time-slicing compared to MIG The most recent versions of NVIDIA GPUs introduce [Multi-instance GPU (MIG) mode](/gpu/how-to/use-nvidia-mig-technology/). Fully integrated into Kubernetes in 2020, MIG allows a single GPU to be partitioned into smaller, predefined instances, essentially resembling miniaturized GPUs. These instances provide memory and fault isolation directly at the hardware level. Instead of using the entire native GPU, you can run workloads on one of these predefined instances, enabling shared GPU access. -Kubernetes GPU time-slicing divides the GPU resources at the container level within a Kubernetes cluster. Multiple containers (pods) share a single GPU, whereas MIG divides the GPU resources at the hardware level. Each MIG instance behaves like a separate GPU. +Kubernetes GPU time-slicing divides the GPU resources at the container level within a Kubernetes cluster. Multiple containers (Pods) share a single GPU, whereas MIG divides the GPU resources at the hardware level. Each MIG instance behaves like a separate GPU. While time-slicing facilitates shared GPU access across a broader user spectrum, it comes with a trade-off. It sacrifices the memory and fault isolation advantages inherent to MIG. Additionally, it presents a solution to enable shared GPU access on earlier GPU generations lacking MIG support. Combining MIG and time-slicing is feasible to expand the scope of shared access to MIG instances. diff --git a/pages/gpu/troubleshooting/index.mdx b/pages/gpu/troubleshooting/index.mdx index 6738d6a718..7986e3dbcb 100644 --- a/pages/gpu/troubleshooting/index.mdx +++ b/pages/gpu/troubleshooting/index.mdx @@ -37,7 +37,7 @@ productIcon: GpuServersProductIcon diff --git a/pages/kubernetes/api-cli/cluster-monitoring.mdx b/pages/kubernetes/api-cli/cluster-monitoring.mdx index 6b7c52d203..33818996e9 100644 --- a/pages/kubernetes/api-cli/cluster-monitoring.mdx +++ b/pages/kubernetes/api-cli/cluster-monitoring.mdx @@ -60,7 +60,7 @@ Deploy the Prometheus stack in a dedicated Kubernetes [namespace](https://kubern STATUS: DEPLOYED [..] ``` -4. 
Verify that the created pods are all running once the stack is deployed. You can also check whether the 100Gi block volume was created:
+4. Verify that the created Pods are all running once the stack is deployed. You can also check whether the 100Gi block volume was created:
    ```bash
    kubectl get pods,pv,pvc -n monitoring
    NAME READY STATUS RESTARTS AGE
@@ -166,7 +166,7 @@ The `loki` application is not included in the default Helm repositories.
       ...Successfully got an update from the "grafana" chart repository
       Update Complete. ⎈Happy Helming!⎈
       ```
-2. Install the `loki-stack` with Helm. Install all the stack in a Kubernetes dedicated [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `loki-stack`. Deploy it to your cluster and enable persistence (allow Helm to create a Scaleway block device and attach it to the Loki pod to store its data) using a Kubernetes [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to survive a pod re-schedule:
+2. Install the `loki-stack` with Helm, placing the whole stack in a dedicated Kubernetes [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `loki-stack`. Deploy it to your cluster and enable persistence (allow Helm to create a Scaleway block device and attach it to the Loki Pod to store its data) using a Kubernetes [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) so the data survives a Pod re-schedule:
    ```bash
    helm install loki-stack grafana/loki-stack \
    --create-namespace \
@@ -198,7 +198,7 @@ The `loki` application is not included in the default Helm repositories.
    persistentvolumeclaim/loki-grafana Bound pvc-88038939-24a5-4383-abe8-f3aab97b7ce7 10Gi RWO scw-bssd 19s
    persistentvolumeclaim/storage-loki-stack-0 Bound pvc-c6fce993-a73d-4423-9464-7c10ab009062 100Gi RWO scw-bssd 5m3s
    ```
-5. Now that both Loki and Grafana are installed in the cluster, check if the pods are correctly running:
+5. Now that both Loki and Grafana are installed in the cluster, check if the Pods are correctly running:
    ```bash
    kubectl get pods -n loki-stack

@@ -224,4 +224,4 @@ The `loki` application is not included in the default Helm repositories.
10. Check you can access your logs using the explore tab in Grafana:

-You now have a Loki stack up and running. All your pods’ logs will be stored in Loki and you will be able to view and query your applications’ logs in Grafana. Refer to the [Loki documentation](https://grafana.com/docs/features/datasources/loki/), if you want to learn more about querying the Loki data source. \ No newline at end of file
+You now have a Loki stack up and running. All your Pods’ logs will be stored in Loki and you will be able to view and query your applications’ logs in Grafana. Refer to the [Loki documentation](https://grafana.com/docs/features/datasources/loki/), if you want to learn more about querying the Loki data source. \ No newline at end of file
diff --git a/pages/kubernetes/concepts.mdx b/pages/kubernetes/concepts.mdx
index 6d5bccf7d7..238332cba3 100644
--- a/pages/kubernetes/concepts.mdx
+++ b/pages/kubernetes/concepts.mdx
@@ -46,11 +46,11 @@ The container runtime is the software that is responsible for running containers

## Control plane

-The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers, and a cluster usually runs multiple nodes, providing fault-tolerance and high availability. Scaleway manages the control plane and associated Load Balancers. Consider the following when creating a control plane:
+The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers, and a cluster usually runs multiple nodes, providing fault-tolerance and high availability. Scaleway manages the control plane and associated Load Balancers. Consider the following when creating a control plane:

- A cluster belongs to one region.
- As the cluster's control plane and Load Balancer are managed by Scaleway, it is not possible to access them directly or configure them individually.
-- A cluster requires a minimum of one pool of worker machines to deploy Kubernetes resources. Note that pods must run on a worker node.
+- A cluster requires a minimum of one pool of worker machines to deploy Kubernetes resources. Note that Pods must run on a worker node.

## Easy Deploy

@@ -86,7 +86,7 @@ Kubernetes Kapsule provides a managed environment for you to create, configure,

## Kubernetes Kosmos

-Kubernetes Kosmos is the first-of-its-kind managed multi-cloud Kubernetes Engine. It allows the connection of Instances and servers from any Cloud provider to a single managed control plane hosted by Scaleway. Using Kubernetes in a multi-cloud cluster provides a high level of application redundancy by authorizing pod replication across different providers, regions, and Availability Zones. See our documentation to learn [how to create a Kubernetes Kosmos cluster](/kubernetes/how-to/create-kosmos-cluster/).
+Kubernetes Kosmos is the first-of-its-kind managed multi-cloud Kubernetes Engine. It allows the connection of Instances and servers from any Cloud provider to a single managed control plane hosted by Scaleway. Using Kubernetes in a multi-cloud cluster provides a high level of application redundancy by authorizing Pod replication across different providers, regions, and Availability Zones. See our documentation to learn [how to create a Kubernetes Kosmos cluster](/kubernetes/how-to/create-kosmos-cluster/).

[Learn more about the differences between Kapsule and Kosmos](/kubernetes/reference-content/understanding-differences-kapsule-kosmos/).

@@ -107,11 +107,11 @@ Namespaces are used in Kubernetes to divide the same cluster resources between m

## Node

-Kubernetes runs your workload by placing containers into pods to run on nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run pods.
+Kubernetes runs your workload by placing containers into Pods to run on nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.

## Pods

-A pod is the smallest and simplest unit in the Kubernetes object model. Containers are not directly assigned to hosts in Kubernetes. Instead, one or multiple containers that are working closely together are bundled in a pod together, sharing a unique network address, storage resources and information on how to govern the containers.
+A Pod is the smallest and simplest unit in the Kubernetes object model. Containers are not directly assigned to hosts in Kubernetes. Instead, one or multiple containers that work closely together are bundled together in a Pod, sharing a unique network address, storage resources, and information on how to govern the containers.
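A minimal manifest illustrates the concept (a sketch; `nginx` is just a placeholder workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.27   # all containers in this Pod share its network address and volumes
      ports:
        - containerPort: 80
```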
## Pool @@ -122,16 +122,16 @@ The pool resource is a group of Scaleway Instances, organized by type (e.g., GP1 ## ReplicaSet -The task of a ReplicaSet is to create and delete pods as needed to reach the desired status. A ReplicaSet contains: +The task of a ReplicaSet is to create and delete Pods as needed to reach the desired status. A ReplicaSet contains: -- Information about the number of pods it can acquire -- Information about the number of pods it maintains -- A pod template, specifying the data of new pods to meet the number of replicas criteria. - Each pod within a ReplicaSet can be identified via the `metadata.ownerReference` field, allowing the ReplicaSet to monitor each pod's state. +- Information about the number of Pods it can acquire +- Information about the number of Pods it maintains +- A Pod template, specifying the data of new Pods to meet the number of replicas criteria. + Each Pod within a ReplicaSet can be identified via the `metadata.ownerReference` field, allowing the ReplicaSet to monitor each Pod's state. ## Services -A service is an abstraction that defines a logical group of pods that perform the same function, and a policy on how to access them. The service provides a stable endpoint (IP address) and acts as a Load Balancer by redirecting requests to the different pods in the service. The service abstraction allows the scaling out or replacement of dead pods without making changes to the configuration of an application. +A service is an abstraction that defines a logical group of Pods that perform the same function, and a policy on how to access them. The service provides a stable endpoint (IP address) and acts as a Load Balancer by redirecting requests to the different Pods in the service. The service abstraction allows the scaling out or replacement of dead Pods without making changes to the configuration of an application. By default, services are only available using internally routable IP addresses, but can be exposed publicly. This can be done using the `NodePort` configuration, which opens a static port on each node's external networking interface. Alternatively, it is also possible to use the `load-balancer` service, which creates an external Load Balancer at a cloud provider using Kubernetes `load-balancer` integration. diff --git a/pages/kubernetes/how-to/create-cluster.mdx b/pages/kubernetes/how-to/create-cluster.mdx index a1ad2d5d05..d3d0c81d90 100644 --- a/pages/kubernetes/how-to/create-cluster.mdx +++ b/pages/kubernetes/how-to/create-cluster.mdx @@ -12,7 +12,7 @@ import Requirements from '@macros/iam/requirements.mdx' Scaleway Kubernetes Kapsule provides a managed environment for creating, configuring, and operating a cluster of preconfigured nodes for containerized applications. This service allows you to deploy [Kubernetes](https://kubernetes.io) clusters without the complexity of managing the underlying infrastructure. Key benefits include: - * Dynamic scaling of pods based on workload demands. + * Dynamic scaling of Pods based on workload demands. * Simplified cluster management via [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line tool. To facilitate cluster administration, Scaleway provides a `.kubeconfig` file, enabling you to manage your cluster locally using `kubectl`. This tool is essential for executing commands against Kubernetes clusters. 
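For example, once the `.kubeconfig` file is downloaded, pointing `kubectl` at it is enough to start working with the cluster (the file path below is a placeholder):

```bash
# Use the downloaded kubeconfig for the current shell session
export KUBECONFIG="$HOME/.kube/my-kapsule-cluster.yaml"

# Verify connectivity by listing the cluster's nodes
kubectl get nodes
```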
diff --git a/pages/kubernetes/how-to/deploy-image-from-container-registry.mdx b/pages/kubernetes/how-to/deploy-image-from-container-registry.mdx
index 14bfb664f5..069454f94a 100644
--- a/pages/kubernetes/how-to/deploy-image-from-container-registry.mdx
+++ b/pages/kubernetes/how-to/deploy-image-from-container-registry.mdx
@@ -11,7 +11,7 @@ import Requirements from '@macros/iam/requirements.mdx'

In this how-to guide you learn how to create and push a container image to the Scaleway [Container Registry](/container-registry/concepts/#container-registry) and how to use it on [Kubernetes Kapsule](/kubernetes/concepts/#kubernetes-kapsule).

-A container image consists of several bundled files, which encapsulate an application. This image can be built on a local machine, uploaded to the image registry, and then deployed on various Kubernetes pods with Kapsule. [Kapsule](/kubernetes/concepts/#kubernetes-kapsule) is the managed Kubernetes service provided by Scaleway. In this tutorial, we use [Docker](https://www.docker.com/) to build the containers.
+A container image consists of several bundled files, which encapsulate an application. This image can be built on a local machine, uploaded to the image registry, and then deployed on various Kubernetes Pods with Kapsule. [Kapsule](/kubernetes/concepts/#kubernetes-kapsule) is the managed Kubernetes service provided by Scaleway. In this tutorial, we use [Docker](https://www.docker.com/) to build the containers.

The generated Docker images are stored in a private Docker registry using the Scaleway [Container Registry](/container-registry/concepts/#container-registry) product.

diff --git a/pages/kubernetes/how-to/deploy-x86-arm-images.mdx b/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
index 681eaae126..14bf70a155 100644
--- a/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
+++ b/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
@@ -27,13 +27,13 @@ These images contain binaries for multiple architectures, allowing Kubernetes to

1. Build multi-arch images. Docker supports multi-arch builds using `buildx`.
2. Push the built images to a container registry accessible by your Kubernetes cluster. For example, you can use the [Scaleway Container Registry](/container-registry/quickstart/).
-3. Specify node selectors and affinity. Use either [node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) and [affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to ensure pods are scheduled on nodes with compatible architectures.
+3. Specify node selectors and affinity. Use either [node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) or [affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to ensure Pods are scheduled on nodes with compatible architectures.

- Alternatively, use taints to mark nodes with specific architectures and tolerations to allow pods to run on those nodes. Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for more information regarding taints and tolerations.
+ Alternatively, use taints to mark nodes with specific architectures and tolerations to allow Pods to run on those nodes.
Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for more information regarding taints and tolerations. #### Example -Below, you can find an example of a pod configuration with affinity set to target the `kubernetes.io/arch=arm64` label, which is present by default on Scaleway ARM nodes: +Below, you can find an example of a Pod configuration with affinity set to target the `kubernetes.io/arch=arm64` label, which is present by default on Scaleway ARM nodes: ```yaml apiVersion: v1 @@ -56,8 +56,8 @@ spec: # Add more container configurations as needed ``` -In this example, the pod's affinity is configured to be scheduled on nodes that have the label `kubernetes.io/arch` with the value `arm64`. -This ensures that the pod will only be scheduled on nodes with this architecture. +In this example, the Pod's affinity is configured to be scheduled on nodes that have the label `kubernetes.io/arch` with the value `arm64`. +This ensures that the Pod will only be scheduled on nodes with this architecture. Using multi-arch images you benefit from a diff --git a/pages/kubernetes/how-to/enable-easy-deploy.mdx b/pages/kubernetes/how-to/enable-easy-deploy.mdx index e4283df2f5..9e25faada9 100644 --- a/pages/kubernetes/how-to/enable-easy-deploy.mdx +++ b/pages/kubernetes/how-to/enable-easy-deploy.mdx @@ -56,7 +56,7 @@ You can also deploy off-the-shelf applications pre-configured for Scaleway produ You can configure the deployment of your clusters in two ways: **Deployments** or **CronJobs**. - A **Deployment** represents a set of identical pods with no individual identities managed by a deployment controller. The deployment controller runs multiple replicas of an application as specified in a ReplicaSet. If any pods fail or become unresponsive, the deployment controller replaces them until the actual state equals the desired state. **When using a deployment Kubernetes object, you do not need to manage your pods or ReplicaSet**. + A **Deployment** represents a set of identical Pods with no individual identities managed by a deployment controller. The deployment controller runs multiple replicas of an application as specified in a ReplicaSet. If any Pods fail or become unresponsive, the deployment controller replaces them until the actual state equals the desired state. **When using a deployment Kubernetes object, you do not need to manage your Pods or ReplicaSet**. You can set up a Load Balancer for your container, create several replicas and add environment variables, such as database host/credentials. diff --git a/pages/kubernetes/how-to/manage-node-pools.mdx b/pages/kubernetes/how-to/manage-node-pools.mdx index 3942098892..cda3a079cf 100644 --- a/pages/kubernetes/how-to/manage-node-pools.mdx +++ b/pages/kubernetes/how-to/manage-node-pools.mdx @@ -68,16 +68,16 @@ This documentation provides step-by-step instructions on how to manage Kubernete Ensure that the new node pool is properly labeled if necessary. 2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state. -3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon ` +3. Cordon the nodes in the old node pool to prevent new Pods from being scheduled there. For each node, run: `kubectl cordon ` You can use a selector on the pool name label to cordon or drain multiple nodes at the same time if your app allows it (ex. `kubectl cordon -l k8s.scaleway.com/pool-name=mypoolname`) -4. 
Drain the nodes to evict the pods gracefully. +4. Drain the nodes to evict the Pods gracefully. - For each node, run: `kubectl drain --ignore-daemonsets --delete-emptydir-data` - - The `--ignore-daemonsets` flag is used because daemon sets manage pods across all nodes and will automatically reschedule them. - - The `--delete-emptydir-data` flag is necessary if your pods use emptyDir volumes, but use this option carefully as it will delete the data stored in these volumes. + - The `--ignore-daemonsets` flag is used because daemon sets manage Pods across all nodes and will automatically reschedule them. + - The `--delete-emptydir-data` flag is necessary if your Pods use emptyDir volumes, but use this option carefully as it will delete the data stored in these volumes. - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information. -5. Run `kubectl get pods -o wide` after draining, to verify that the pods have been rescheduled to the new node pool. +5. Run `kubectl get pods -o wide` after draining, to verify that the Pods have been rescheduled to the new node pool. 6. [Delete the old node pool](#how-to-delete-an-existing-kubernetes-kapsule-node-pool) once you confirm that all workloads are running smoothly on the new node pool. ## How to delete an existing Kubernetes Kapsule node pool diff --git a/pages/kubernetes/how-to/monitor-cluster.mdx b/pages/kubernetes/how-to/monitor-cluster.mdx index 616583948f..dff9059220 100644 --- a/pages/kubernetes/how-to/monitor-cluster.mdx +++ b/pages/kubernetes/how-to/monitor-cluster.mdx @@ -56,7 +56,7 @@ Grafana's rich visualizations and ease of use make it an ideal choice. Cockpit o The cluster overview dashboard offers real-time monitoring capabilities for the `kube-apiserver` within your Kubernetes cluster. Serving as a crucial component on the control plane, the `kube-apiserver` acts as the gateway to the Kubernetes API, allowing you to interact with the cluster. -Large clusters with numerous resources (nodes, pods, and custom resources (CRDs)) and high controller requests (e.g. argocd and velero) can cause CPU and memory spikes, leading to sluggish or unresponsive API server performance. You may also encounter errors like `"EOF"` when using `kubectl`. +Large clusters with numerous resources (nodes, Pods, and custom resources (CRDs)) and high controller requests (e.g. argocd and velero) can cause CPU and memory spikes, leading to sluggish or unresponsive API server performance. You may also encounter errors like `"EOF"` when using `kubectl`. To address this potential issue, it is crucial to monitor the CPU and RAM consumption of the apiserver closely. By doing so, you can proactively manage and reduce the load on the apiserver, thus averting performance bottlenecks. diff --git a/pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx b/pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx index 2ffda403eb..574c4f0bce 100644 --- a/pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx +++ b/pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx @@ -14,7 +14,7 @@ You can now send **data plane** logs from your [Kapsule](https://www.scaleway.co This feature allows you to: - **Enhance observability**: View logs from all your Kubernetes containers in one place. -- **Simplify troubleshooting**: Quickly drill down into specific pods or containers without needing to configure a separate logging stack. 
+- **Simplify troubleshooting**: Quickly drill down into specific Pods or containers without needing to configure a separate logging stack. This feature does incur costs based on the volume of logs ingested. Refer to [Cockpit FAQ](/cockpit/faq/#how-am-i-billed-for-using-cockpit-with-custom-data) for more details and best practices to avoid unexpected bills. @@ -49,7 +49,7 @@ Because the data plane is entirely under your control, **logs from any component The system leverages **Promtail** (a lightweight log collector) running on your Kapsule or Kosmos cluster. Promtail forwards logs to the Loki endpoint of your Cockpit instance: 1. **Promtail** can collect logs from: - - **Container stdout/stderr** (pods) + - **Container stdout/stderr** (Pods) - **systemd journal** (e.g., `kubelet.service`) 2. The app automatically creates a custom datasource called `kubernetes-logs` and a Cockpit token with push logs permission. 3. **Log data** is transmitted to **Cockpit** (Loki). @@ -84,7 +84,7 @@ config: snippets: scrapeConfigs: | - {{{- cockpit_promtail_scrape_config_pods }}} # Default: log all pods + {{{- cockpit_promtail_scrape_config_pods }}} # Default: log all Pods {{{- cockpit_promtail_scrape_config_journal }}} # Default: log all system components extraVolumeMounts: - mountPath: /var/log/journal @@ -105,8 +105,8 @@ Once Promtail is running: 1. Go to the **Cockpit** section of the Scaleway console, then click **Open dashboards**. 2. Log into Grafana using your [Cockpit credentials](/cockpit/how-to/retrieve-grafana-credentials/). -3. In Grafana's menu, navigate to **Dashboards** and select **Kubernetes Cluster Pod Logs** to view logs collected from pods in your clusters. -4. **Filter pod logs** by: +3. In Grafana's menu, navigate to **Dashboards** and select **Kubernetes Cluster Pod Logs** to view logs collected from Pods in your clusters. +4. **Filter Pod logs** by: - `Datasource` which is automatically created upon deployment and visible in the Cockpit console - `Cluster Name` ( e.g. `my-kapsule-cluster`) - `namespace`, `pod`, or `container` labels to isolate specific workloads @@ -153,7 +153,7 @@ Key points include: ## Troubleshooting - **No logs appearing** in Cockpit: - - Verify that the Promtail pod is running. + - Verify that the Promtail Pod is running. ```bash kubectl get pods -n ``` diff --git a/pages/kubernetes/how-to/use-scratch-storage-h100.mdx b/pages/kubernetes/how-to/use-scratch-storage-h100.mdx index 2213e22ab5..2cf36a7e97 100644 --- a/pages/kubernetes/how-to/use-scratch-storage-h100.mdx +++ b/pages/kubernetes/how-to/use-scratch-storage-h100.mdx @@ -26,9 +26,9 @@ Design your workloads or applications to take advantage of the fast and temporar - [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization - [Created](/kubernetes/how-to/create-cluster/) a Kubernetes Kapsule or Kosmos cluster that uses [H100 and L40S GPU Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) -In Kubernetes, H100 and L40S scratch storage is implemented as a directory on the host node's file system. This scratch volume is made available to containers within a pod using the `hostPath` volume type, allowing direct integration with Kubernetes workloads. +In Kubernetes, H100 and L40S scratch storage is implemented as a directory on the host node's file system. This scratch volume is made available to containers within a Pod using the `hostPath` volume type, allowing direct integration with Kubernetes workloads. 
-Configuring H100 and L40S scratch storage in Kubernetes involves specifying the `hostPath` volume type in the pod's volume definition. The following is an example configuration:
+Configuring H100 and L40S scratch storage in Kubernetes involves specifying the `hostPath` volume type in the Pod's volume definition. The following is an example configuration:

```yaml
apiVersion: apps/v1
diff --git a/pages/kubernetes/reference-content/exposing-services.mdx b/pages/kubernetes/reference-content/exposing-services.mdx
index 8e6e6a4bb4..3154210cba 100644
--- a/pages/kubernetes/reference-content/exposing-services.mdx
+++ b/pages/kubernetes/reference-content/exposing-services.mdx
@@ -17,19 +17,19 @@ There are a number of different ways to expose your cluster to the internet. In

## Comparison of cluster exposure methods

-In Kubernetes, you generally need to use a [Service](/kubernetes/concepts/#services) to expose an application in your cluster to the internet. A service groups together pods performing the same function (e.g. running the same application) and defines how to access them.
+In Kubernetes, you generally need to use a [Service](/kubernetes/concepts/#services) to expose an application in your cluster to the internet. A service groups together Pods performing the same function (e.g. running the same application) and defines how to access them.

-The most basic type of service is **clusterIP**, but this only provides internal access, from within the cluster, to the defined pods. The **NodePort** and **LoadBalancer** services both provide external access. **Ingress** (which is not a service but an API object inside a cluster) combined with an explicitly-created **ingress controller** is another way to expose the cluster.
+The most basic type of service is **clusterIP**, but this only provides internal access, from within the cluster, to the defined Pods. The **NodePort** and **LoadBalancer** services both provide external access. **Ingress** (which is not a service but an API object inside a cluster) combined with an explicitly-created **ingress controller** is another way to expose the cluster.

See the table below for more information.

| Method | Description | Suitable for | Limitations |
| ----------------- | ----------- | ------------ | ----------- |
-| **Cluster IP Service** | • Provides internal connectivity between cluster components. <br/> • Has a fixed IP address, from which it balances traffic between pods with matching labels. | Internal communication between different components within a cluster | Cannot be used to expose an application inside the cluster to the public internet. |
-| **Node Port Service** | • Exposes a specific port on each node of a cluster. <br/> • Forwards external traffic received on that port to the right pods. | • Exposing single-node, low-traffic clusters to the internet for free <br/> • Testing. | • Not ideal for production or complex clusters <br/> • Single point of failure. <br/> • Not all port numbers can be opened.|
-| **Load Balancer Service** | • Creates a single, external Load Balancer with a public IP. <br/> • External LB forwards all traffic to the corresponding LoadBalancer service within the cluster. <br/> • LoadBalancer service then forwards traffic to the right pods. <br/> • Operates at the L4 level. | • Exposing a service in the cluster to the internet. <br/> • Production envs (highly available). <br/> • Dealing with TCP traffic. | • Each service in the cluster needs its own Load Balancer (can become costly). |
-| **Ingress** | • A native resource inside the cluster (not a service). <br/> • Ingress controller receives a single, external public IP, usually in front of a spun-up external HTTP Load Balancer. <br/> • Uses a set of rules to forward web traffic (HTTP(S)) to the correct service out of multiple services within the cluster. <br/> • Each service then sends the traffic to a suitable pod. <br/> • Operates at the L7 level.| • Clusters with many services. <br/> • Dealing with HTTP(S) traffic. | • Requires an ingress controller (not included by default, must be created). <br/> • Designed for HTTP(S) traffic only (more complicated to configure for other protocols). |
+| **Cluster IP Service** | • Provides internal connectivity between cluster components. <br/> • Has a fixed IP address, from which it balances traffic between Pods with matching labels. | Internal communication between different components within a cluster | Cannot be used to expose an application inside the cluster to the public internet. |
+| **Node Port Service** | • Exposes a specific port on each node of a cluster. <br/> • Forwards external traffic received on that port to the right Pods. | • Exposing single-node, low-traffic clusters to the internet for free <br/> • Testing. | • Not ideal for production or complex clusters <br/> • Single point of failure. <br/> • Not all port numbers can be opened.|
+| **Load Balancer Service** | • Creates a single, external Load Balancer with a public IP. <br/> • External LB forwards all traffic to the corresponding LoadBalancer service within the cluster. <br/> • LoadBalancer service then forwards traffic to the right Pods. <br/> • Operates at the L4 level. | • Exposing a service in the cluster to the internet. <br/> • Production envs (highly available). <br/> • Dealing with TCP traffic. | • Each service in the cluster needs its own Load Balancer (can become costly). |
+| **Ingress** | • A native resource inside the cluster (not a service). <br/> • Ingress controller receives a single, external public IP, usually in front of a spun-up external HTTP Load Balancer. <br/> • Uses a set of rules to forward web traffic (HTTP(S)) to the correct service out of multiple services within the cluster. <br/> • Each service then sends the traffic to a suitable Pod. <br/> • Operates at the L7 level.| • Clusters with many services. <br/> • Dealing with HTTP(S) traffic. | • Requires an ingress controller (not included by default, must be created). <br/> • Designed for HTTP(S) traffic only (more complicated to configure for other protocols). |

Our [webinar](https://www.youtube.com/watch?v=V0uKqYXJRF4) may also be useful to you when considering how to expose your cluster. From 5m47 to 13m43, the different methods to expose a cluster are described and compared.

diff --git a/pages/kubernetes/reference-content/introduction-to-kubernetes.mdx b/pages/kubernetes/reference-content/introduction-to-kubernetes.mdx
index eeec64ed9b..b1e9886fd6 100644
--- a/pages/kubernetes/reference-content/introduction-to-kubernetes.mdx
+++ b/pages/kubernetes/reference-content/introduction-to-kubernetes.mdx
@@ -44,7 +44,7 @@ Kubernetes is able to manage a cluster of virtual or physical machines using a s

Each machine in a Kubernetes cluster has a given role within the Kubernetes ecosystem. One of these servers acts as the **control plane**, the "brain" of the cluster exposing the different APIs, performing health checks on other servers, scheduling the workloads and orchestrating communication between different components. The control plane acts as the primary point of contact with the cluster.

-The other machines in the cluster are called **nodes**. These machines are designed to run workloads in containers, meaning each of them requires a container runtime installed on it (for example, [Docker](/tutorials/install-docker-ubuntu-bionic/) or [CRI-O](https://cri-o.io/)).
+The other machines in the cluster are called **nodes**. These machines are designed to run workloads in containers, meaning each of them requires a container runtime installed on it (for example, `containerd`).

The different underlying components running in the cluster ensure that the desired state of an application matches the actual state of the cluster. To ensure the desired state of an application, the control plane responds to any changes by performing necessary actions. These actions include creating or destroying containers on the nodes and adjusting network rules to route and forward traffic as directed by the control plane.

@@ -68,7 +68,7 @@ The `kube-apiserver` is a component on the control plane that exposes the Kubern

#### `kube-scheduler`

-The `kube-scheduler` is a control plane component watching newly created pods that have no node assigned yet and assigns them a node to run on.
+The `kube-scheduler` is a control plane component watching newly created Pods that have no node assigned yet and assigns them a node to run on.

It assigns the node based on individual and collective resource requirements, hardware/software/policy constraints, and more.

@@ -88,22 +88,22 @@ It "glues" the different capabilities, features, and APIs of different providers

Servers that perform workloads in Kubernetes (running containers) are called **nodes**. Nodes may be VMs or physical machines.

-Node components are maintaining pods and providing the Kubernetes runtime environment. These components run on every node in the cluster.
+Node components maintain Pods and provide the Kubernetes runtime environment. These components run on every node in the cluster.

#### `kubelet`

-The `kubelet` is an agent running on each node and ensuring that containers are running in a pod. It makes sure that containers described in `PodSpecs` are running and healthy. The agent does not manage any containers that were not created by Kubernetes.
+The `kubelet` is an agent running on each node and ensuring that containers are running in a Pod. It makes sure that containers described in `PodSpecs` are running and healthy.
The agent does not manage any containers that were not created by Kubernetes. -#### `kube-proxy` +#### `kube-proxy` (optional) -The `kube-proxy` is a network proxy running on each node in the cluster. It maintains the network rules on nodes to allow communication to the pods inside the cluster from internal or external connections. +The `kube-proxy` is a network proxy running on each node in the cluster. It maintains the network rules on nodes to allow communication to the Pods inside the cluster from internal or external connections. `kube-proxy` uses either the packet filtering layer of the operating system, if there is one, or forwards the traffic itself if there is none. -### Container runtime +#### Container runtime Kubernetes is able to manage containers, but is not capable of running them. Therefore, a container runtime is required that is responsible for running containers. -Kubernetes supports several container runtimes like `Docker` or `containerd` as well as any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). +Kubernetes Kapsule provides the `containerd` container runtime. ## Kubernetes objects @@ -111,59 +111,63 @@ Kubernetes uses containers to deploy applications, but it also uses additional l ### Pods -A **pod** is the smallest and simplest unit in the Kubernetes object model. Containers are not directly assigned to hosts in Kubernetes. Instead, one or multiple containers that are working closely together are bundled in a pod, sharing a unique network address, storage resources and information on how to govern the containers. +A **Pod** is the smallest and simplest unit in the Kubernetes object model. Containers are not directly assigned to hosts in Kubernetes. Instead, one or multiple containers that are working closely together are bundled in a Pod, sharing a unique network address, storage resources and information on how to govern the containers. ### Services -A **service** is an abstraction which defines a logical group of pods that perform the same function and a policy on how to access them. The service provides a stable endpoint (IP address) and acts like a Load Balancer by redirecting requests to the different pods in the service. The service abstraction allows scaling out or replacing dead pods without making changes in the configuration of an application. +A **Service** is an abstraction which defines a logical group of Pods that perform the same function and a policy on how to access them. The service provides a stable endpoint (IP address) and acts like a Load Balancer by redirecting requests to the different Pods in the service. The service abstraction allows scaling out or replacing dead Pods without making changes in the configuration of an application. By default, services are only available using internally routable IP addresses, but can be exposed publicly. -It can be done either by using the `NodePort` configuration, which works by opening a static port on each node's external networking interface. Otherwise, it is possible to use the `LoadBalancer` service, which creates an external Load Balancer at a cloud provider using Kubernetes load-balancer integration. 
### ReplicaSet

-A **ReplicaSet** contains information about how many pods it can acquire, how many pods it shall maintain, and a pod template specifying the data of new pods to meet the number of replicas criteria. The task of a ReplicaSet is to create and delete pods as needed to reach the desired status.
+A `ReplicaSet` defines how many Pods it can acquire, how many Pods it must maintain, and a Pod template describing the new Pods it creates to satisfy the replica count. The task of a ReplicaSet is to create and delete Pods as needed to reach the desired state.

-Each pod within a ReplicaSet can be identified via the `metadata.ownerReference` field, allowing the ReplicaSet to know the state of each of them. It can then schedule tasks according to the state of the pods.
+Each Pod within a ReplicaSet can be identified via the `metadata.ownerReference` field, allowing the ReplicaSet to know the state of each of them. It can then schedule tasks according to the state of the Pods.

-However, `Deployments` are a higher-level concept managing ReplicaSets and providing declarative updates to pods with several useful features. It is therefore recommended to use Deployments unless you require some specific customized orchestration.
+However, `Deployments` are a higher-level concept managing ReplicaSets and providing declarative updates to Pods with several useful features. It is therefore recommended to use Deployments unless you require some specific customized orchestration.

### Deployments

-A Deployment is representing a set of identical pods with no individual identities, managed by a _deployment controller_.
+A `Deployment` in Kubernetes provides declarative updates for applications. It manages `ReplicaSets`, which in turn manage the actual Pods.

-The deployment controller runs multiple replicas of an application as specified in a _ReplicaSet_. In case any pods fail or become unresponsive, the deployment controller replaces them until the actual state equals the desired state.
+The deployment controller continuously ensures that the desired number of Pod replicas are running. If Pods fail, become unresponsive, or are deleted, it automatically creates replacements to match the desired state. Deployments also support rolling updates and rollbacks, making them the standard way to manage stateless applications.
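A minimal sketch of a Deployment and the Pod template handed to its underlying ReplicaSet (all names are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical name
spec:
  replicas: 3                # desired number of Pod replicas
  selector:
    matchLabels:
      app: web
  template:                  # Pod template used by the underlying ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
```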
### StatefulSets

-A StatefulSet is able to manage pods like the deployment controller but maintains a sticky identity of each pod. Pods are created from the same base, but are not interchangeable.
+A `StatefulSet` manages Pods in a similar way to a Deployment, but with one crucial difference: each Pod has a **persistent identity** and is **not interchangeable**. Pods are created from the same specification, yet each one gets a unique, ordinal-based name that persists even if the Pod is rescheduled to a different node.

-The operating pattern of StatefulSet is the same as for any other Controllers. The StatefulSet controller maintains the desired state, defined in a StatefulSet object, by making the necessary update to go from the actual state of a cluster to the desired state.
+Like other controllers, the StatefulSet controller continuously reconciles the cluster’s actual state with the desired state defined in the StatefulSet object.

-The unique, number-based name of each pod in the StatefulSet persists, even if a pod is being moved to another node.
+Because Pods are treated as unique, each can be associated with its own dedicated storage volume. This makes StatefulSets the preferred choice for workloads that require **stable network identities, persistent storage, and ordered deployment or scaling**, such as databases and distributed systems.

### DaemonSets

-Another type of pod controller is called DaemonSet. It ensures that all (or some) nodes run a copy of a pod. For most use cases, it does not matter where pods are running, but in some cases, it is required that a single pod runs on all nodes. This is useful for aggregating log files, collecting metrics, or running a network storage cluster.
+Another type of Pod controller is called `DaemonSet`. It ensures that all (or some) nodes run a copy of a Pod. For most use cases, it does not matter where Pods are running, but in some cases, it is required that a single Pod runs on all nodes. This is useful for aggregating log files, collecting metrics, or running a network storage cluster.

### Jobs and CronJobs

-Jobs manage a task until it runs to completion. They can run multiple pods in parallel, and are useful for batch-orientated tasks.
+Jobs manage a task until it runs to completion. They can run multiple Pods in parallel, and are useful for batch-oriented tasks.

CronJobs in Kubernetes work like traditional cron jobs on Linux. They can be used to run tasks at a specific time or interval and may be useful for tasks such as backups or cleanup.
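As a sketch, a CronJob that runs a nightly task (the name, image, and command are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup       # hypothetical name
spec:
  schedule: "0 2 * * *"      # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: alpine:3.20
              command: ["sh", "-c", "echo backing up"]   # placeholder task
```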
### Volumes

-A Volume is a directory that is accessible to containers in a pod. Kubernetes uses its own volumes' abstraction, allowing data to be shared by all containers and remain available until the pod is terminated.
+A Volume is a directory that is accessible to containers in a Pod. Kubernetes uses its own volume abstraction, allowing data to be shared by all containers and to remain available until the Pod is terminated.

-A Kubernetes volume has an explicit lifetime - the same as the pod that encloses it. This means data in a pod will be destroyed when a pod ceases to exist. This also means volumes are not a good solution for storing persistent data.
+A Kubernetes volume has an explicit lifetime: the same as the Pod that encloses it. This means data in a Pod will be destroyed when the Pod ceases to exist. This also means volumes are not a good solution for storing persistent data.

### Persistent volumes

-To avoid the constraints of the volume life cycle being tied to the pod life cycle, [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) allow configuring storage resources for a cluster that are independent of the life cycle of a pod.
+To avoid the constraints of the volume life cycle being tied to the Pod life cycle, [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) allow configuring storage resources for a cluster that are independent of the life cycle of a Pod.

-Once a pod is being terminated, the reclamation policy of the volume determines if the volume is kept until it gets deleted manually or if it is being terminated with the pod.
+Once a Pod is terminated, the reclaim policy of the volume determines whether the volume is kept until it is deleted manually or deleted together with the Pod.
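A minimal PersistentVolumeClaim sketch; the storage class name `scw-bssd` is an assumption here (the `scw-bssd-retain` class shown further down uses the `Retain` reclaim policy, keeping the volume after deletion):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: scw-bssd  # assumed Scaleway Block Storage class; adjust to your cluster
  resources:
    requests:
      storage: 10Gi
```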

## Going further

diff --git a/pages/kubernetes/reference-content/kubernetes-control-plane-offers.mdx b/pages/kubernetes/reference-content/kubernetes-control-plane-offers.mdx
index 52ce33f8fa..96d45c2390 100644
--- a/pages/kubernetes/reference-content/kubernetes-control-plane-offers.mdx
+++ b/pages/kubernetes/reference-content/kubernetes-control-plane-offers.mdx
@@ -24,7 +24,6 @@ Whether you are seeking a mutualized environment or a dedicated control plane, w

| Control plane type /<br />Features | Mutualized | Dedicated 4 | Dedicated 8 | Dedicated 16 |
|--------------------|---------------------------|-------------------|-------------------|--------------------|
| Memory | up to 4 GB<sup>1</sup> | 4 GB dedicated RAM | 8 GB dedicated RAM | 16 GB dedicated RAM |
-| CPU | up to 1vCPU | 2vCPU | 2vCPU | 4vCPU |
| API server Availability | 1 resilient replica | 2 replicas for HA | 2 replicas for HA | 2 replicas for HA |
| etcd Availability | 3 replicas in multi-AZ | 3 replicas in multi-AZ | 3 replicas in multi-AZ | 3 replicas in multi-AZ |
| SLA | N/A | 99.5% uptime | 99.5% uptime | 99.5% uptime |

diff --git a/pages/kubernetes/reference-content/kubernetes-load-balancer.mdx b/pages/kubernetes/reference-content/kubernetes-load-balancer.mdx
index c7170303f2..c31c8efc07 100644
--- a/pages/kubernetes/reference-content/kubernetes-load-balancer.mdx
+++ b/pages/kubernetes/reference-content/kubernetes-load-balancer.mdx
@@ -84,7 +84,7 @@ You can refer to the [following example of webserver application to run.](https:
   - `port`: the new service port that will be created, for connecting to the application
   - `name`: a name for this port, e.g. `http`
   - `targetPort`: the application port to target with requests coming from the Service
-  - `selector`: links the LoadBalancer Service with a set of pods in the cluster. Ensure that the `app` specified matches the name of the deployment of your app in the cluster (run `kubectl get all` if necessary to check the name).
+  - `selector`: links the LoadBalancer Service with a set of Pods in the cluster. Ensure that the `app` specified matches the name of the deployment of your app in the cluster (run `kubectl get all` if necessary to check the name).

2. Use the command `kubectl create -f .yaml` to tell the Kubernetes Cloud Controller to create the Load Balancer from the manifest in the default namespace.

diff --git a/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx b/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx
index 149ae1edd2..13b539c016 100644
--- a/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx
+++ b/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx
@@ -50,7 +50,7 @@ This guide outlines the recommended steps to migrate your Kubernetes Kapsule clu

## Migrating workloads to the new pool

-1. [**Cordon**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) the deprecated nodes to prevent them from receiving new pods:
+1. [**Cordon**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) the deprecated nodes to prevent them from receiving new Pods:
   ```bash
   kubectl cordon 
   ```

diff --git a/pages/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx b/pages/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx
index d6b0ce3506..08353536fb 100644
--- a/pages/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx
+++ b/pages/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx
@@ -35,7 +35,7 @@ Kernel parameters, managed via the `sysctl` command, are grouped into different

## Creating a DaemonSet to modify kernel parameters

-To apply kernel parameter changes across all nodes in the cluster, you can create a Kubernetes DaemonSet that runs privileged pods. This will ensure the changes are applied to every node.
+To apply kernel parameter changes across all nodes in the cluster, you can create a Kubernetes DaemonSet that runs privileged Pods. This ensures the changes are applied to every node.

Create a YAML file (e.g., `sysctl-daemonset.yaml`), copy/paste the following content into the file, then save it and exit the text editor:

@@ -70,12 +70,12 @@ spec:
      securityContext:
        privileged: true # Privileged access to modify sysctl settings on the host
      containers:
-       - name: sleep-container # Main container to keep the pod running
+       - name: sleep-container # Main container to keep the Pod running
          image: busybox:latest
          command:
            - /bin/sh
            - -c
-           - sleep infinity # Keep the pod alive indefinitely
+           - sleep infinity # Keep the Pod alive indefinitely
```

## Applying the DaemonSet

diff --git a/pages/kubernetes/reference-content/multi-az-clusters.mdx b/pages/kubernetes/reference-content/multi-az-clusters.mdx
index 026db88512..b23fb23fad 100644
--- a/pages/kubernetes/reference-content/multi-az-clusters.mdx
+++ b/pages/kubernetes/reference-content/multi-az-clusters.mdx
@@ -31,7 +31,7 @@ The main advantages of running a Kubernetes Kapsule cluster in multiple AZs are:

- We recommend configuring your cluster with at least three nodes spread across at least two different AZs for better reliability and data resiliency.
- Automatically replicate persistent data and storage volumes across multiple AZs to prevent data loss and ensure seamless application performance, even if one zone experiences issues.
-- Use [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) to distribute pods evenly across different AZs, enhancing the overall availability and resilience of your applications by preventing single points of failure.
+- Use [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) to distribute Pods evenly across different AZs, enhancing the overall availability and resilience of your applications by preventing single points of failure.
- Ensure your load balancers are zone-aware to distribute traffic efficiently across nodes in different AZs, preventing overloading a single zone.

For more information, refer to the [official Kubernetes best practices for running clusters in multiple zones](https://kubernetes.io/docs/setup/best-practices/multiple-zones/) documentation.

@@ -140,7 +140,7 @@ After applying this Terraform/OpenTofu configuration, the cluster and node pools

### Deployments with topologySpreadConstraints

-`topologySpreadConstraints` allow for fine control over how pods are spread across your Kubernetes cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains.
+`topologySpreadConstraints` allow fine-grained control over how Pods are spread across your Kubernetes cluster among failure domains such as regions, zones, nodes, and other user-defined topology domains.

This approach ensures high availability and resiliency. For more information, refer to the [official Kubernetes Pod Topology Spread Constraints documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/).

@@ -174,7 +174,7 @@ spec:
  #... other settings
```

-In this example, `maxSkew` describes the maximum difference between the number of matching pods in any two topology domains of a given topology type. The `topologyKey` specifies a key for node labels. For spreading the pods evenly across zones, use `topology.kubernetes.io/zone`.
+In this example, `maxSkew` describes the maximum difference between the number of matching Pods in any two topology domains of a given topology type. The `topologyKey` specifies a key for node labels. To spread the Pods evenly across zones, use `topology.kubernetes.io/zone`.
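As a sketch of how such a constraint attaches to a Deployment's Pod template (names are hypothetical; the `resilient-app` label echoes the Service example that follows):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resilient-app        # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: resilient-app
  template:
    metadata:
      labels:
        app: resilient-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                              # at most 1 Pod difference between zones
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: resilient-app
      containers:
        - name: app
          image: nginx:1.27
```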

### Service with scw-loadbalancer-zone annotation

@@ -197,7 +197,7 @@ spec:
  type: LoadBalancer
```

-This service definition creates a load balancer in the "fr-par-1" zone and directs traffic to pods with the `resilient-app` label. Learn more about LoadBalancer annotations with our [dedicated Scaleway LoadBalancer Annotations](https://github.com/scaleway/scaleway-cloud-controller-manager/blob/master/docs/loadbalancer-annotations.md) documentation.
+This Service definition creates a Load Balancer in the "fr-par-1" zone and directs traffic to Pods with the `resilient-app` label. Learn more about LoadBalancer annotations in our [dedicated Scaleway LoadBalancer Annotations](https://github.com/scaleway/scaleway-cloud-controller-manager/blob/master/docs/loadbalancer-annotations.md) documentation.

* Cluster spread over three Availability Zones

@@ -254,9 +254,9 @@ scw-bssd-retain csi.scaleway.com Retain WaitForFirstConsumer t

This upgrade process will automatically modify the storage class to the desired `WaitForFirstConsumer` binding mode.

-Using a storage class with `volumeBindingMode` set to `WaitForFirstConsumer` is a requirement when deploying applications across multiple AZs, especially those that rely on persistent volumes. This configuration ensures that volume creation is contingent on the pod's scheduling, aligning with specific AZ prerequisites.
+Using a storage class with `volumeBindingMode` set to `WaitForFirstConsumer` is a requirement when deploying applications across multiple AZs, especially those that rely on persistent volumes. This configuration delays volume creation until the Pod is scheduled, so the volume is created in the AZ that matches the Pod's placement.

-Creating a volume ahead of this could lead to its arbitrary placement in an AZ, which can cause attachment issues if the pod is subsequently scheduled in a different AZ. The `WaitForFirstConsumer` mode ensures that volumes are instantiated in the same AZ as their corresponding node, ensuring distribution across various AZs as pods are allocated.
+Creating a volume ahead of this could lead to its arbitrary placement in an AZ, which can cause attachment issues if the Pod is subsequently scheduled in a different AZ. The `WaitForFirstConsumer` mode instantiates volumes in the same AZ as their corresponding node, distributing them across AZs as Pods are allocated.

This approach is key to maintaining system resilience and operational consistency across multi-AZ deployments.

diff --git a/pages/kubernetes/reference-content/secure-cluster-with-private-network.mdx b/pages/kubernetes/reference-content/secure-cluster-with-private-network.mdx
index ecb17cc3b0..6a4f343c86 100644
--- a/pages/kubernetes/reference-content/secure-cluster-with-private-network.mdx
+++ b/pages/kubernetes/reference-content/secure-cluster-with-private-network.mdx
@@ -69,7 +69,7 @@ Keep in mind that removing or detaching the Public Gateway from the Private Netw

Only Kapsule can use a Private Network.

-Kosmos uses Kilo as a CNI, which uses WireGuard to create a VPN Mesh between nodes for communication between pods. Any node in Kosmos, either in Scaleway or outside, uses these VPN tunnels to communicate securely by construct.
+Kosmos uses Kilo as a CNI, which uses WireGuard to create a VPN mesh between nodes for communication between Pods. Any node in Kosmos, whether in Scaleway or outside, uses these VPN tunnels to communicate securely by design.

### Are Managed Databases compatible with Kubernetes Kapsule on Private Networks?

diff --git a/pages/kubernetes/reference-content/set-iam-permissions-and-implement-rbac.mdx b/pages/kubernetes/reference-content/set-iam-permissions-and-implement-rbac.mdx
index c020041fd2..acf168c083 100644
--- a/pages/kubernetes/reference-content/set-iam-permissions-and-implement-rbac.mdx
+++ b/pages/kubernetes/reference-content/set-iam-permissions-and-implement-rbac.mdx
@@ -15,7 +15,7 @@ It allows you to assign roles to users, groups or `ServicesAccount` via `RoleBin

Key components of RBAC in Kubernetes include:
- **Roles and ClusterRoles:**
-  - `Roles`: These are specific to a namespace, and define a set of permissions for resources within that namespace (e.g., pods, services).
+  - `Roles`: These are specific to a namespace, and define a set of permissions for resources within that namespace (e.g., Pods, Services).
  - `ClusterRoles`: These are similar to roles but apply cluster-wide, spanning all namespaces.
- **RoleBindings and ClusterRoleBindings:**
  - `RoleBindings`: These associate a set of permissions defined in a role with a user, group, or service account within a specific namespace.
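To make this concrete, a minimal sketch of a namespaced `Role` and its `RoleBinding` (the namespace, user, and object names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # hypothetical Role name
  namespace: dev            # hypothetical namespace
rules:
  - apiGroups: [""]         # "" is the core API group (Pods, Services, ...)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane              # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```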
diff --git a/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx b/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx
index 71a7f5b1b7..fcc9943bbc 100644
--- a/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx
+++ b/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx
@@ -44,7 +44,10 @@ Kosmos is Scaleway's **multi-cloud Kubernetes solution**, designed to operate ac

| Auto healing | ✔️ | Scaleway Instances only |
| Auto scaling | ✔️ | Scaleway Instances only |
| Container Storage Interface | ✔️ Persistent volumes (Block Storage) on Scaleway Instances | Scaleway Instances only |
+| Free mutualized control plane | ✔️ | ✔️ |
| Dedicated control plane options | ✔️ | ✔️ |
| Scaleway VPC | ✔️ Controlled isolation or full isolation | ✘ No integration |
| Scaleway Cockpit | ✔️ | ✔️ |
-| Node pools upgrades | Handled by Kapsule | *Internal pools*: Handled by Kapsule<br />*External pools*: Must be carried out manually per node |
\ No newline at end of file
+| Node pools upgrades | Handled by Kapsule | *Internal pools*: Handled by Kapsule<br />*External pools*: Must be carried out manually per node |
+
+AZ = [Availability Zone](/account/reference-content/products-availability/)
\ No newline at end of file
diff --git a/pages/kubernetes/reference-content/using-load-balancer-annotations.mdx b/pages/kubernetes/reference-content/using-load-balancer-annotations.mdx
index efc9dc6464..bd0a404a44 100644
--- a/pages/kubernetes/reference-content/using-load-balancer-annotations.mdx
+++ b/pages/kubernetes/reference-content/using-load-balancer-annotations.mdx
@@ -9,7 +9,7 @@ dates:

## Overview

-In Kubernetes, annotations are a way to attach metadata to objects, like pods, services, and more. These annotations are key-value pairs that can be useful for tools and libraries interacting with these objects. These annotations are used to influence the behavior, settings, or configurations of the provisioned load balancer resource.
+In Kubernetes, annotations are a way to attach metadata to objects, like Pods, Services, and more. These annotations are key-value pairs that can be useful for tools and libraries interacting with these objects. They are used to influence the behavior, settings, or configuration of the provisioned load balancer resource.

When you [create a Load Balancer](/kubernetes/reference-content/kubernetes-load-balancer/) for your Kubernetes cluster, it will be created with a default configuration, unless you define its configuration parameters via **annotations**. Load Balancer annotations let you configure parameters such as the balancing method, health check settings, and more.
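For illustration, a `LoadBalancer` Service carrying such annotations could look like this. The `scw-loadbalancer-zone` annotation is documented elsewhere in these pages; the balancing-method annotation and its value are assumptions to verify against the Scaleway CCM annotations reference:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: annotated-lb          # hypothetical name
  annotations:
    service.beta.kubernetes.io/scw-loadbalancer-zone: "fr-par-1"
    # assumed annotation for the balancing method; check the annotations reference
    service.beta.kubernetes.io/scw-loadbalancer-forward-port-algorithm: "leastconn"
spec:
  type: LoadBalancer
  selector:
    app: my-app               # hypothetical Pod label
  ports:
    - port: 80
      targetPort: 8080
```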
diff --git a/pages/kubernetes/reference-content/wildcard-dns.mdx b/pages/kubernetes/reference-content/wildcard-dns.mdx
index 47d83eeb34..1b21dcb187 100644
--- a/pages/kubernetes/reference-content/wildcard-dns.mdx
+++ b/pages/kubernetes/reference-content/wildcard-dns.mdx
@@ -78,7 +78,7 @@ Use Helm to deploy the NGINX ingress controller with a `NodePort` service and a
  nodePorts:
    http: 30080
    https: 30443
-  # Node selector to ensure pods run on nodes with public IPs
+  # Node selector to ensure Pods run on nodes with public IPs
  nodeSelector:
    scaleway.com/pool-name: 
  config:
@@ -89,7 +89,7 @@ Use Helm to deploy the NGINX ingress controller with a `NodePort` service and a
```

  - `type: NodePort` exposes the ingress controller on the specified ports (e.g., 30080 for HTTP, 30443 for HTTPS) on each node’s public IP.
-  - The `nodeSelector` ensures the ingress controller pods run on nodes in the `` pool, which have public IPs. Replace `scaleway.com/pool-name: ` if your public IP pool has a different name (check via `kubectl get nodes -o wide`).
+  - The `nodeSelector` ensures the ingress controller Pods run on nodes in the `` pool, which have public IPs. Replace `scaleway.com/pool-name: ` if your public IP pool has a different name (check via `kubectl get nodes -o wide`).
  - Full isolation node pools lack public IPs, making them incompatible with `NodePort`-based ingress unless routed through nodes with public IPs (e.g., default pool).
  - `admissionWebhooks.enabled: true` ensures the validating webhook is enabled for ingress resource validation.

diff --git a/pages/kubernetes/videos.mdx b/pages/kubernetes/videos.mdx
index 165c9b1164..abb3528f16 100644
--- a/pages/kubernetes/videos.mdx
+++ b/pages/kubernetes/videos.mdx
@@ -29,7 +29,7 @@ This is the second in a series of practical video tutorials to help users get st

In this video, we show you how to deploy a containerized application with the Scaleway Kubernetes Kapsule.

-First, we review some key Kubernetes terminology (including pools, nodes, and pods) and then demonstrate how to create a Kubernetes Kapsule via the Scaleway console.
+First, we review some key Kubernetes terminology (including pools, nodes, and Pods) and then demonstrate how to create a Kubernetes Kapsule via the Scaleway console.

Next, we show you how to install kubectl, so you can connect to your cluster from the command line of your local machine, and how to create an Image Pull Secret for your cluster.

@@ -73,7 +73,7 @@ First, we recap what we did in the previous three videos, and explore some key c

Next, we show you how to create a StatefulSet object with its own Scaleway Block Storage volumes, using a yaml manifest and kubectl. The CSI takes care of provisioning the storage from Scaleway.

-We then demonstrate how to view your newly created Block Storage volumes in the console, and witness how the volumes can reattach themselves to the pods of the StatelessSet, even if the pods are temporarily destroyed.
+We then demonstrate how to view your newly created Block Storage volumes in the console, and see how the volumes can reattach themselves to the Pods of the StatefulSet, even if the Pods are temporarily destroyed.