Commit 93dda5d
docs(k8s): update wording
1 parent 9eb61e7

25 files changed: +73 -73 lines changed

changelog/august2025/2025-08-27-kubernetes-added-dns-service-ip-and-pod-and-service-cid.mdx (1 addition, 1 deletion)

@@ -8,6 +8,6 @@ product: kubernetes
 
 The following parameters can be set when creating a cluster:
 - `service-dns-ip`: IP used for the DNS Service (cannot be changed later)
-- `Pod-cidr`: Subnet used for the Pod CIDR (cannot be changed later)
+- `pod-cidr`: Subnet used for the Pod CIDR (cannot be changed later)
 - `service-cidr`: Subnet used for the Service CIDR (cannot be changed later)
 
pages/cockpit/api-cli/querying-logs-with-logcli.mdx (5 additions, 5 deletions)

@@ -74,11 +74,11 @@ An output similar to the following should display:
 2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 pre_filtering_processor.go:57] Node scw-k8s-sharp-robinson-default-7cefec16593342e should not be processed by cluster autoscaler (no node group config)
 2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 pre_filtering_processor.go:57] Node scw-k8s-sharp-robinson-default-bfb90f82c4b949c should not be processed by cluster autoscaler (no node group config)
 2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 static_autoscaler.go:492] Calculating unneeded nodes
-2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 static_autoscaler.go:445] No unschedulable Pods
-2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:87] No schedulable Pods
-2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:177] 0 Pods marked as unschedulable can be scheduled.
-2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:176] 0 Pods were kept as unschedulable based on caching
-2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:137] Filtered out 0 Pods using hints
+2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 static_autoscaler.go:445] No unschedulable pods
+2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:87] No schedulable pods
+2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:177] 0 pods marked as unschedulable can be scheduled.
+2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:176] 0 pods were kept as unschedulable based on caching
+2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 filter_out_schedulable.go:137] Filtered out 0 pods using hints
 ```
 
 <Message type="tip">
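Each LogCLI line in the hunk above has a fixed shape (RFC 3339 timestamp, stream labels in braces, then the log message), so it can be post-processed with standard shell tools. A minimal sketch, using a sample line copied from the output above:

```shell
# Sample LogCLI output line (copied from the autoscaler logs above)
line='2024-05-22T17:33:04+02:00 {component="kapsule-autoscaler"} I0522 1 static_autoscaler.go:445] No unschedulable pods'

# Timestamp: everything up to the first space
ts=${line%% *}
# Stream labels: the content between the first pair of braces
labels=$(printf '%s\n' "$line" | sed -n 's/^[^{]*{\([^}]*\)}.*/\1/p')
# Message: everything after the klog "file:line]" prefix
msg=${line#*"] "}

echo "$ts"
echo "$labels"
echo "$msg"
```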

pages/dedibox-kvm-over-ip/how-to/dell-idrac6.mdx (1 addition, 1 deletion)

@@ -20,7 +20,7 @@ This page shows you how to use [KVM](/dedibox-kvm-over-ip/concepts/#kvm-over-ip)
 <Requirements />
 
 - A Dedibox account logged into the [console](https://console.online.net)
-- Installed [Podman](https://Podman.io/getting-started/installation) on your machine
+- Installed [Podman](https://podman.io/getting-started/installation) on your machine
 - Installed [Java](https://www.java.com/en/download/help/download_options.html) on your local computer
 - A Dedibox server with a Dell iDRAC 6 IPMI interface
 
pages/gpu/how-to/use-mig-with-kubernetes.mdx (9 additions, 9 deletions)

@@ -34,7 +34,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
 
 1. Find the name of the Pods running the Nvidia Driver:
 ```
-% kubectl get Pods -n kube-system
+% kubectl get pods -n kube-system
 NAME READY STATUS RESTARTS AGE
 cilium-operator-fbff794f4-kff42 1/1 Running 0 4h13m
 cilium-sfkgc 1/1 Running 0 4h12m
@@ -324,14 +324,14 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
 2. Deploy the Pods:
 ```
 % kubectl create -f deploy-mig.yaml
-Pod/test-1 created
-Pod/test-2 created
-Pod/test-3 created
-Pod/test-4 created
-Pod/test-5 created
-Pod/test-6 created
-Pod/test-7 created
-Pod/test-8 created
+pod/test-1 created
+pod/test-2 created
+pod/test-3 created
+pod/test-4 created
+pod/test-5 created
+pod/test-6 created
+pod/test-7 created
+pod/test-8 created
 ```
 
 3. Display the logs of the Pods. The Pods print their UUID with the `nvidia-smi` command:
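The `pod/test-N created` lines above can be turned back into bare pod names for the follow-up `kubectl logs` step; a small sketch over a few sample lines (pure text processing, no cluster needed):

```shell
# Sample "kubectl create" output lines (as in the deployment step above)
created='pod/test-1 created
pod/test-2 created
pod/test-3 created'

# Strip the "pod/" resource prefix and the trailing " created"
names=$(printf '%s\n' "$created" | sed 's|^pod/||; s| created$||')
echo "$names"
```

Each resulting name can then be passed to `kubectl logs <name>` in a loop.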

pages/kubernetes/api-cli/cluster-monitoring.mdx (9 additions, 9 deletions)

@@ -62,13 +62,13 @@ Deploy the Prometheus stack in a dedicated Kubernetes [namespace](https://kubern
 ```
 4. Verify that the created Pods are all running once the stack is deployed. You can also check whether the 100Gi block volume was created:
 ```bash
-kubectl get Pods,pv,pvc -n monitoring
+kubectl get pods,pv,pvc -n monitoring
 NAME READY STATUS RESTARTS AGE
-Pod/prometheus-alertmanager-6565668c85-5vdxc 2/2 Running 0 67s
-Pod/prometheus-kube-state-metrics-6756bbbb8-6qs9r 1/1 Running 0 67s
-Pod/prometheus-node-exporter-fbg6s 1/1 Running 0 67s
-Pod/prometheus-pushgateway-6d75c59b7b-6knfd 1/1 Running 0 67s
-Pod/prometheus-server-556dbfdfb5-rx6nl 1/2 Running 0 67s
+pod/prometheus-alertmanager-6565668c85-5vdxc 2/2 Running 0 67s
+pod/prometheus-kube-state-metrics-6756bbbb8-6qs9r 1/1 Running 0 67s
+pod/prometheus-node-exporter-fbg6s 1/1 Running 0 67s
+pod/prometheus-pushgateway-6d75c59b7b-6knfd 1/1 Running 0 67s
+pod/prometheus-server-556dbfdfb5-rx6nl 1/2 Running 0 67s
 
 NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
 persistentvolume/pvc-5a9def3b-22a1-4545-9adb-72823b899c36 100Gi RWO Delete Bound monitoring/prometheus-server scw-bssd 67s
@@ -80,8 +80,8 @@ Deploy the Prometheus stack in a dedicated Kubernetes [namespace](https://kubern
 ```
 5. To access Prometheus use the Kubernetes [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) feature:
 ```bash
-export Pod_NAME=$(kubectl get Pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
-kubectl --namespace monitoring port-forward $Pod_NAME 9090
+export pod_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
+kubectl --namespace monitoring port-forward $pod_NAME 9090
 ```
 6. Access the Prometheus dashboard using the following URL: [http://localhost:9090](http://localhost:9090)
 <Lightbox image={image} alt="" />
@@ -200,7 +200,7 @@ The `loki` application is not included in the default Helm repositories.
 ```
 5. Now that both Loki and Grafana are installed in the cluster, check if the Pods are correctly running:
 ```bash
-kubectl get Pods -n loki-stack
+kubectl get pods -n loki-stack
 
 NAME READY STATUS RESTARTS AGE
 loki-grafana-67994589cc-7jq4t 0/1 Running 0 74s

pages/kubernetes/how-to/deploy-image-from-container-registry.mdx (3 additions, 3 deletions)

@@ -157,9 +157,9 @@ To deploy the previously created container image in a Kapsule cluster, you need
 ```bash
 kubectl apply -f deployment.yaml
 ```
-4. Use the `kubectl get Pods` command to check the status of the deployment:
+4. Use the `kubectl get pods` command to check the status of the deployment:
 ```bash
-kubectl get Pods
+kubectl get pods
 ```
 
 ```
@@ -170,6 +170,6 @@ To deploy the previously created container image in a Kapsule cluster, you need
 
 As you can see in the output above, the image has been pulled successfully from the registry and two replicas of it are running on the Kapsule cluster.
 
-For more information how to use your Container Registry with Kubernetes, refer to the [official documentation](https://kubernetes.io/docs/tasks/configure-Pod-container/pull-image-private-registry/).
+For more information how to use your Container Registry with Kubernetes, refer to the [official documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
 
 
pages/kubernetes/how-to/deploy-x86-arm-images.mdx (2 additions, 2 deletions)

@@ -27,7 +27,7 @@ These images contain binaries for multiple architectures, allowing Kubernetes to
 
 1. Build multi-arch images. Docker supports multi-arch builds using `buildx`.
 2. Push the built images to a container registry accessible by your Kubernetes cluster. For example, you can use the [Scaleway Container Registry](/container-registry/quickstart/).
-3. Specify node selectors and affinity. Use either [node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-Pod-node/#nodeselector) and [affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-Pod-node/#affinity-and-anti-affinity) to ensure Pods are scheduled on nodes with compatible architectures.
+3. Specify node selectors and affinity. Use either [node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) and [affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to ensure Pods are scheduled on nodes with compatible architectures.
 <Message type="tip">
 Alternatively, use taints to mark nodes with specific architectures and tolerations to allow Pods to run on those nodes. Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for more information regarding taints and tolerations.
 </Message>
@@ -39,7 +39,7 @@ Below, you can find an example of a Pod configuration with affinity set to targe
 apiVersion: v1
 kind: Pod
 metadata:
-  name: example-Pod-with-affinity
+  name: example-pod-with-affinity
 spec:
   affinity:
     nodeAffinity:
pages/kubernetes/how-to/manage-node-pools.mdx (1 addition, 1 deletion)

@@ -77,7 +77,7 @@ This documentation provides step-by-step instructions on how to manage Kubernete
 - The `--ignore-daemonsets` flag is used because daemon sets manage Pods across all nodes and will automatically reschedule them.
 - The `--delete-emptydir-data` flag is necessary if your Pods use emptyDir volumes, but use this option carefully as it will delete the data stored in these volumes.
 - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information.
-5. Run `kubectl get Pods -o wide` after draining, to verify that the Pods have been rescheduled to the new node pool.
+5. Run `kubectl get pods -o wide` after draining, to verify that the Pods have been rescheduled to the new node pool.
 6. [Delete the old node pool](#how-to-delete-an-existing-kubernetes-kapsule-node-pool) once you confirm that all workloads are running smoothly on the new node pool.
 
 ## How to delete an existing Kubernetes Kapsule node pool

pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx (2 additions, 2 deletions)

@@ -155,11 +155,11 @@ Key points include:
 - **No logs appearing** in Cockpit:
 - Verify that the Promtail Pod is running.
 ```bash
-kubectl get Pods -n <promtail-namespace>
+kubectl get pods -n <promtail-namespace>
 ```
 - Inspect Promtail logs for errors.
 ```bash
-kubectl logs <promtail-Pod-name> -n <promtail-namespace>
+kubectl logs <promtail-pod-name> -n <promtail-namespace>
 ```
 
 - **High log ingestion cost**:

tutorials/ceph-cluster/index.mdx (1 addition, 1 deletion)

@@ -33,7 +33,7 @@ This tutorial guides you through deploying a three-node Ceph cluster with a RADO
 2. Install software dependencies on all nodes and the admin machine:
 ```
 sudo apt update
-sudo apt install -y python3 chrony lvm2 Podman
+sudo apt install -y python3 chrony lvm2 podman
 sudo systemctl enable chrony
 ```
 
