
Commit 212d70e

bene2k1 and nox-404 authored
Apply suggestions from code review
Co-authored-by: Nox <[email protected]>
1 parent c76305d commit 212d70e

File tree

13 files changed: +37 −37 lines changed

pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx

Lines changed: 2 additions & 2 deletions
@@ -110,10 +110,10 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource.
    ```bash
-   rate(kubernetes_cluster_k8s_shoot_nodes_Pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
+   rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
    ```
    <Message type="tip">
-     The `kubernetes_cluster_k8s_shoot_nodes_Pods_usage_total` metric represents the total number of Pods currently running across all nodes in your Kubernetes cluster. It is helpful to monitor current Pod consumption per node pool or cluster, and help track resource saturation or unexpected workload spikes.
+     The `kubernetes_cluster_k8s_shoot_nodes_pods_usage_total` metric represents the total number of Pods currently running across all nodes in your Kubernetes cluster. It is helpful to monitor current Pod consumption per node pool or cluster, and help track resource saturation or unexpected workload spikes.
    </Message>
 2. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
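For context, the corrected metric name (Prometheus metric names are case-sensitive, so the uppercase `Pods` variant matches nothing) slots into a standard Prometheus alerting rule. A minimal sketch; the group name, alert name, `for` duration, and severity label are illustrative, not part of this commit:

```yaml
groups:
  - name: kapsule-pod-alerts      # illustrative group name (see step 3 above)
    interval: 1m                  # default evaluation interval mentioned in step 3
    rules:
      - alert: PodUsageFlat       # illustrative alert name
        expr: rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
        for: 5m                   # step 2: how long the condition must hold before firing
        labels:
          severity: warning
```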

pages/cockpit/how-to/send-logs-from-k8s-to-cockpit.mdx

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ nodeLogs:
   enabled: true
   destinations: ["my-cockpit-logs"]
 # -- Pod logs.
-PodLogs:
+podLogs:
   enabled: true
   destinations: ["my-cockpit-logs"]
 volumeGatherSettings:
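After this one-character fix, the logging section of the Helm values file reads as follows (a sketch assembled from the context lines above; the surrounding keys are unchanged):

```yaml
nodeLogs:
  enabled: true
  destinations: ["my-cockpit-logs"]
# -- Pod logs.
podLogs:                            # lowercase "p": Helm values keys are case-sensitive
  enabled: true
  destinations: ["my-cockpit-logs"]
```

Helm typically does not reject unrecognized top-level values keys unless the chart ships a values schema, so the `PodLogs` typo would silently disable Pod log collection rather than fail the install.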

pages/gpu/how-to/use-mig-with-kubernetes.mdx

Lines changed: 8 additions & 8 deletions
@@ -359,14 +359,14 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
 4. Clean the deployment:
    ```
    % kubectl delete -f deploy-mig.yaml
-   Pod "test-1" deleted
-   Pod "test-2" deleted
-   Pod "test-3" deleted
-   Pod "test-4" deleted
-   Pod "test-5" deleted
-   Pod "test-6" deleted
-   Pod "test-7" deleted
-   Pod "test-8" deleted
+   pod "test-1" deleted
+   pod "test-2" deleted
+   pod "test-3" deleted
+   pod "test-4" deleted
+   pod "test-5" deleted
+   pod "test-6" deleted
+   pod "test-7" deleted
+   pod "test-8" deleted
    ```

 ## Disable MIG inside a Kubernetes cluster

pages/kubernetes/concepts.mdx

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Kubernetes - Concepts
 description: This page explains all the concepts related to Kubernetes Kapsule
-tags: kubernetes autoheal autoscale auto-upgrade container-runtime cni control-plane kubeconfi kubectl Pods replicaset
+tags: kubernetes autoheal autoscale auto-upgrade container-runtime cni control-plane kubeconfi kubectl pods replicaset
 dates:
   validation: 2025-09-03
 ---

pages/kubernetes/how-to/monitor-data-plane-with-cockpit.mdx

Lines changed: 3 additions & 3 deletions
@@ -69,7 +69,7 @@ You can use Scaleway’s **[Easy Deploy](/kubernetes/how-to/enable-easy-deploy/)
 <Message type="note">
   You can edit the default deployment configuration to filter logs by source (under `config.snippets.scrapeConfigs` in the YAML file). For example:
   ```yaml
-  cockpit_promtail_scrape_config_Pods: "namespace1,namespace2"
+  cockpit_promtail_scrape_config_pods: "namespace1,namespace2"
   cockpit_promtail_scrape_config_journal: "kubelet.service,kube-proxy.service"
   ```
 </Message>
@@ -84,7 +84,7 @@ config:
 
 snippets:
   scrapeConfigs: |
-    {{{- cockpit_promtail_scrape_config_Pods }}} # Default: log all Pods
+    {{{- cockpit_promtail_scrape_config_pods }}} # Default: log all Pods
     {{{- cockpit_promtail_scrape_config_journal }}} # Default: log all system components
 extraVolumeMounts:
 - mountPath: /var/log/journal
@@ -109,7 +109,7 @@ Once Promtail is running:
 4. **Filter Pod logs** by:
   - `Datasource` which is automatically created upon deployment and visible in the Cockpit console
   - `Cluster Name` ( e.g. `my-kapsule-cluster`)
-  - `namespace`, `Pod`, or `container` labels to isolate specific workloads
+  - `namespace`, `pod`, or `container` labels to isolate specific workloads
   - `Time range` to limit how far back in history you want to query
 5. Alternatively, in Grafana's menu, navigate to **Dashboards** and select **Kubernetes Cluster Node Logs** to view system logs collected from nodes in your clusters.
 6. **Filter node logs** by:
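The lowercase label name matters because label matchers in Grafana's log queries are case-sensitive: `Pod` would match nothing. A sketch of a LogQL query over the filters listed above, where the exact label names (`cluster_name`) and values are assumptions for illustration:

```
{cluster_name="my-kapsule-cluster", namespace="default", pod=~"my-app-.*"}
```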

pages/kubernetes/how-to/recover-space-etcd.mdx

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ Kind: endpoints, Size: 82063
 Kind: events, Size: 375065
 Kind: limitranges, Size: 68
 Kind: persistentvolumeclaims, Size: 68
-Kind: Pods, Size: 3326153
+Kind: pods, Size: 3326153
 [...]
 ```

pages/kubernetes/reference-content/introduction-to-kubernetes.mdx

Lines changed: 2 additions & 2 deletions
@@ -103,7 +103,7 @@ The `kube-proxy` is a network proxy running on each node in the cluster. It main
 #### Container runtime
 
 Kubernetes is able to manage containers, but is not capable of running them. Therefore, a container runtime is required that is responsible for running containers.
-Kubernetes Kapsule supports the `containerd` container runtimes as well as any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).
+Kubernetes Kapsule provides the `containerd` container runtime.
 
 ## Kubernetes objects
 
@@ -115,7 +115,7 @@ A **Pod** is the smallest and simplest unit in the Kubernetes object model. Cont
 
 ### Services
 
-A **service** is an abstraction which defines a logical group of Pods that perform the same function and a policy on how to access them. The service provides a stable endpoint (IP address) and acts like a Load Balancer by redirecting requests to the different Pods in the service. The service abstraction allows scaling out or replacing dead Pods without making changes in the configuration of an application.
+A **Service** is an abstraction which defines a logical group of Pods that perform the same function and a policy on how to access them. The service provides a stable endpoint (IP address) and acts like a Load Balancer by redirecting requests to the different Pods in the service. The service abstraction allows scaling out or replacing dead Pods without making changes in the configuration of an application.
 
 By default, services are only available using internally routable IP addresses, but can be exposed publicly.
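The Service abstraction described in that hunk looks like this in manifest form. A minimal sketch; the name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # illustrative name
spec:
  selector:
    app: my-app             # the logical group of Pods this Service fronts
  ports:
    - port: 80              # stable endpoint port
      targetPort: 8080      # container port on the selected Pods
  # type defaults to ClusterIP (internally routable only);
  # set type: LoadBalancer to expose the Service publicly
```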

pages/kubernetes/reference-content/kubernetes-load-balancer.mdx

Lines changed: 3 additions & 3 deletions
@@ -156,10 +156,10 @@ For full details on how to use Load Balancer annotations when creating your Load
 If your Load Balancer is configured for [SSL offload](/load-balancer/reference-content/ssl-bridging-offloading-passthrough/#configuring-a-load-balancer-for-ssl-offloading), and you have several services behind the same Load Balancer, you will encounter an SSL error when trying to reach the other services using HTTPS.
 
 Example:
-- PodA
-- PodB (`https://serviceB.example.com`)
+- podA
+- podB (`https://serviceB.example.com`)
 
-From PodA, a curl to `https://serviceB.example.com` will result in an SSL error.
+From podA, a curl to `https://serviceB.example.com` will result in an SSL error.
 
 To force the Load Balancer to handle all requests using the SSL offload, enable the [use hostname annotations](https://github.com/scaleway/scaleway-cloud-controller-manager/blob/master/docs/loadbalancer-annotations.md#servicebetakubernetesioscw-loadbalancer-use-hostname) on your Load Balancer service.
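The annotation linked in that hunk is set on the Service object itself. A sketch under the assumption that the annotation key matches the fragment in the linked URL (`service.beta.kubernetes.io/scw-loadbalancer-use-hostname`); service name and port are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: serviceB            # illustrative name
  annotations:
    # Route via the Load Balancer's hostname so SSL offload applies to all requests
    service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
spec:
  type: LoadBalancer
  selector:
    app: serviceB           # illustrative selector
  ports:
    - port: 443
```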

pages/kubernetes/reference-content/set-iam-permissions-and-implement-rbac.mdx

Lines changed: 9 additions & 9 deletions
@@ -15,7 +15,7 @@ It allows you to assign roles to users, groups or `ServicesAccount` via `RoleBin
 Key components of RBAC in Kubernetes include:
 
 - **Roles and ClusterRoles:**
-  - `Roles`: These are specific to a namespace, and define a set of permissions for resources within that namespace (e.g., Pods, services).
+  - `Roles`: These are specific to a namespace, and define a set of permissions for resources within that namespace (e.g., Pods, Services).
   - `ClusterRoles`: These are similar to roles but apply cluster-wide, spanning all namespaces.
 - **RoleBindings and ClusterRoleBindings:**
   - `RoleBindings`: These associate a set of permissions defined in a role with a user, group, or service account within a specific namespace.
@@ -238,9 +238,9 @@ rules:
   - nodes
   - persistentvolumeclaims
   - persistentvolumeclaims/status
-  - Pods
-  - Pods/log
-  - Pods/status
+  - pods
+  - pods/log
+  - pods/status
   - replicationcontrollers
   - replicationcontrollers/scale
   - replicationcontrollers/status
@@ -258,7 +258,7 @@ rules:
   apiGroups:
   - metrics.k8s.io
   resources:
-  - Pods
+  - pods
   - nodes
 
 
@@ -290,8 +290,8 @@ rules:
   apiGroups:
   - autoscaling
   resources:
-  - horizontalPodautoscalers
-  - horizontalPodautoscalers/status
+  - horizontalpodautoscalers
+  - horizontalpodautoscalers/status
 
 
 - verbs:
@@ -335,8 +335,8 @@ rules:
   apiGroups:
   - policy
   resources:
-  - Poddisruptionbudgets
-  - Poddisruptionbudgets/status
+  - poddisruptionbudgets
+  - poddisruptionbudgets/status
 
 
 - verbs:
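The fixes in this file all apply the same rule: entries under `resources` in an RBAC rule must use the lowercase plural resource name exactly as the API server registers it (e.g. `pods`, `horizontalpodautoscalers`, `poddisruptionbudgets`), or the rule silently grants nothing. A minimal Role sketch, with an illustrative name and namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # illustrative name
  namespace: default
rules:
- apiGroups: [""]           # "" is the core API group
  resources: ["pods", "pods/log"]   # lowercase plural, as registered by the API server
  verbs: ["get", "list", "watch"]
```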

tutorials/deploy-istio-kapsule-proxy-protocol/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -338,5 +338,5 @@ Once you have added the configurations, you may need to restart the IngressGatew
 
 Run the following command to delete the existing Pods using `kubectl`. Kubernetes will spin up new ones automatically after you launch the command:
 ```bash
-kubectl delete Pod -n istio-system -l istio=ingressgateway
+kubectl delete pod -n istio-system -l istio=ingressgateway
 ```
