|
1 | 1 | 🎲🎲🎲 EXIT CODE: 0 🎲🎲🎲 |
2 | 2 | 🟥🟥🟥 STDERR 🟥🟥🟥
3 | | -Create a new Kubernetes cluster on a Scaleway account. |
| 3 | +Create a new Kubernetes cluster in a Scaleway region. |
4 | 4 |
|
5 | 5 | USAGE: |
6 | 6 | scw k8s cluster create [arg=value ...] |
7 | 7 |
|
8 | 8 | EXAMPLES: |
9 | | - Create a Kubernetes cluster named foo with cilium as CNI, in version 1.24.7 and with a pool named default composed of 3 DEV1-M |
10 | | - scw k8s cluster create name=foo version=1.24.7 pools.0.size=3 pools.0.node-type=DEV1-M pools.0.name=default |
| 9 | + Create a Kubernetes cluster named foo with cilium as CNI, in version 1.27.0 and with a pool named default composed of 3 DEV1-M |
| 10 | + scw k8s cluster create name=foo version=1.27.0 pools.0.size=3 pools.0.node-type=DEV1-M pools.0.name=default |
11 | 11 |
|
12 | | - Create a Kubernetes cluster named bar, tagged, calico as CNI, in version 1.24.7 and with a tagged pool named default composed of 2 RENDER-S and autohealing and autoscaling enabled (between 1 and 10 nodes) |
13 | | - scw k8s cluster create name=bar version=1.24.7 tags.0=tag1 tags.1=tag2 cni=calico pools.0.size=2 pools.0.node-type=RENDER-S pools.0.min-size=1 pools.0.max-size=10 pools.0.autohealing=true pools.0.autoscaling=true pools.0.tags.0=pooltag1 pools.0.tags.1=pooltag2 pools.0.name=default |
| 12 | + Create a Kubernetes cluster named bar, tagged, calico as CNI, in version 1.27.0 and with a tagged pool named default composed of 2 RENDER-S and autohealing and autoscaling enabled (between 1 and 10 nodes) |
| 13 | + scw k8s cluster create name=bar version=1.27.0 tags.0=tag1 tags.1=tag2 cni=calico pools.0.size=2 pools.0.node-type=RENDER-S pools.0.min-size=1 pools.0.max-size=10 pools.0.autohealing=true pools.0.autoscaling=true pools.0.tags.0=pooltag1 pools.0.tags.1=pooltag2 pools.0.name=default |
14 | 14 |
|
15 | 15 | ARGS: |
16 | 16 | [project-id] Project ID to use. If none is passed the default project ID will be used |
17 | | - [type] Type of the cluster (possible values are kapsule, multicloud). |
18 | | - name=<generated> Name of the cluster |
19 | | - [description] Description of the cluster |
| 17 | + [type] Type of the cluster (possible values are kapsule, multicloud, kapsule-dedicated-8, kapsule-dedicated-16) |
| 18 | + name=<generated> Cluster name |
| 19 | + [description] Cluster description |
20 | 20 | [tags.{index}] Tags associated with the cluster |
21 | 21 | version=latest Kubernetes version of the cluster |
22 | | - cni=cilium Container Network Interface (CNI) plugin that will run in the cluster (unknown_cni | cilium | calico | weave | flannel | kilo) |
| 22 | + cni=cilium Container Network Interface (CNI) plugin running in the cluster (unknown_cni | cilium | calico | weave | flannel | kilo) |
23 | 23 | pools.{index}.name Name of the pool |
24 | | - pools.{index}.node-type Node type is the type of Scaleway Instance wanted for the pool |
| 24 | + pools.{index}.node-type Node type is the type of Scaleway Instance wanted for the pool. Nodes with insufficient memory are not eligible (DEV1-S, PLAY2-PICO, STARDUST). 'external' is a special node type used to provision instances from other cloud providers in a Kosmos Cluster |
25 | 25 | [pools.{index}.placement-group-id] Placement group ID in which all the nodes of the pool will be created |
26 | 26 | [pools.{index}.autoscaling] Defines whether the autoscaling feature is enabled for the pool |
27 | 27 | pools.{index}.size Size (number of nodes) of the pool |
28 | | - [pools.{index}.min-size] Minimum size of the pool |
29 | | - [pools.{index}.max-size] Maximum size of the pool |
30 | | - [pools.{index}.container-runtime] Container runtime for the nodes of the pool (unknown_runtime | docker | containerd | crio) |
| 28 | + [pools.{index}.min-size] Defines the minimum size of the pool. Note that this field is only used when autoscaling is enabled on the pool |
| 29 | + [pools.{index}.max-size] Defines the maximum size of the pool. Note that this field is only used when autoscaling is enabled on the pool |
| 30 | + [pools.{index}.container-runtime] Customization of the container runtime is available for each pool. Note that `docker` has been deprecated since version 1.20 and will be removed by version 1.24 (unknown_runtime | docker | containerd | crio) |
31 | 31 | [pools.{index}.autohealing] Defines whether the autohealing feature is enabled for the pool |
32 | 32 | [pools.{index}.tags.{index}] Tags associated with the pool |
33 | | - [pools.{index}.kubelet-args.{key}] Kubelet arguments to be used by this pool. Note that this feature is to be considered as experimental |
| 33 | + [pools.{index}.kubelet-args.{key}] Kubelet arguments to be used by this pool. Note that this feature is experimental |
34 | 34 | [pools.{index}.upgrade-policy.max-unavailable] The maximum number of nodes that can be not ready at the same time |
35 | 35 | [pools.{index}.upgrade-policy.max-surge] The maximum number of nodes to be created during the upgrade |
36 | 36 | [pools.{index}.zone] Zone in which the pool's nodes will be spawned |
37 | | - [pools.{index}.root-volume-type] System volume disk type (default_volume_type | l_ssd | b_ssd) |
| 37 | + [pools.{index}.root-volume-type] Defines the system volume disk type. Two different types of volume (`volume_type`) are provided: `l_ssd` is a local block storage which means your system is stored locally on your node's hypervisor. `b_ssd` is a remote block storage which means your system is stored on a centralized and resilient cluster (default_volume_type | l_ssd | b_ssd) |
38 | 38 | [pools.{index}.root-volume-size] System volume disk size |
39 | 39 | [autoscaler-config.scale-down-disabled] Disable the cluster autoscaler |
40 | 40 | [autoscaler-config.scale-down-delay-after-add] How long after scale up that scale down evaluation resumes |
41 | 41 | [autoscaler-config.estimator] Type of resource estimator to be used in scale up (unknown_estimator | binpacking) |
42 | 42 | [autoscaler-config.expander] Type of node group expander to be used in scale up (unknown_expander | random | most_pods | least_waste | priority | price) |
43 | 43 | [autoscaler-config.ignore-daemonsets-utilization] Ignore DaemonSet pods when calculating resource utilization for scaling down |
44 | 44 | [autoscaler-config.balance-similar-node-groups] Detect similar node groups and balance the number of nodes between them |
45 | | - [autoscaler-config.expendable-pods-priority-cutoff] Pods with priority below cutoff will be expendable |
46 | | - [autoscaler-config.scale-down-unneeded-time] How long a node should be unneeded before it is eligible for scale down |
47 | | - [autoscaler-config.scale-down-utilization-threshold] Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down |
| 45 | + [autoscaler-config.expendable-pods-priority-cutoff] Pods with priority below cutoff will be expendable. They can be killed without any consideration during scale down and they won't cause scale up. Pods with null priority (PodPriority disabled) are non expendable |
| 46 | + [autoscaler-config.scale-down-unneeded-time] How long a node should be unneeded before it is eligible to be scaled down |
| 47 | + [autoscaler-config.scale-down-utilization-threshold] Node utilization level, defined as a sum of requested resources divided by capacity, below which a node can be considered for scale down |
48 | 48 | [autoscaler-config.max-graceful-termination-sec] Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node |
49 | | - [auto-upgrade.enable] Whether or not auto upgrade is enabled for the cluster |
| 49 | + [auto-upgrade.enable] Defines whether auto upgrade is enabled for the cluster |
50 | 50 | [auto-upgrade.maintenance-window.start-hour] Start time of the two-hour maintenance window |
51 | 51 | [auto-upgrade.maintenance-window.day] Day of the week for the maintenance window (any | monday | tuesday | wednesday | thursday | friday | saturday | sunday) |
52 | 52 | [feature-gates.{index}] List of feature gates to enable |
53 | 53 | [admission-plugins.{index}] List of admission plugins to enable |
54 | | - [open-id-connect-config.issuer-url] URL of the provider which allows the API server to discover public signing keys |
55 | | - [open-id-connect-config.client-id] A client id that all tokens must be issued for |
56 | | - [open-id-connect-config.username-claim] JWT claim to use as the user name |
57 | | - [open-id-connect-config.username-prefix] Prefix prepended to username |
| 54 | + [open-id-connect-config.issuer-url] URL of the provider which allows the API server to discover public signing keys. Only URLs using the `https://` scheme are accepted. This is typically the provider's discovery URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com" |
| 55 | + [open-id-connect-config.client-id] A client ID that all tokens must be issued for |
| 56 | + [open-id-connect-config.username-claim] JWT claim to use as the user name. The default is `sub`, which is expected to be the end user's unique identifier. Admins can choose other claims, such as `email` or `name`, depending on their provider. However, claims other than `email` will be prefixed with the issuer URL to prevent name collision |
| 57 | + [open-id-connect-config.username-prefix] Prefix prepended to username claims to prevent name collision (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag is not provided and `username_claim` is a value other than `email`, the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `issuer_url`. The value `-` can be used to disable all prefixing |
58 | 58 | [open-id-connect-config.groups-claim.{index}] JWT claim to use as the user's group |
59 | | - [open-id-connect-config.groups-prefix] Prefix prepended to group claims |
60 | | - [open-id-connect-config.required-claim.{index}] Multiple key=value pairs that describes a required claim in the ID token |
| 59 | + [open-id-connect-config.groups-prefix] Prefix prepended to group claims to prevent name collision (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra` |
| 60 | + [open-id-connect-config.required-claim.{index}] Multiple key=value pairs describing a required claim in the ID token. If set, the claims are verified to be present in the ID token with a matching value |
61 | 61 | [apiserver-cert-sans.{index}] Additional Subject Alternative Names for the Kubernetes API server certificate |
62 | 62 | [private-network-id] Private network ID for internal cluster communication (cannot be changed later) |
63 | 63 | [organization-id] Organization ID to use. If none is passed the default organization ID will be used |
64 | 64 | [region=fr-par] Region to target. If none is passed will use default region from the config (fr-par | nl-ams | pl-waw) |
65 | 65 |
|
66 | 66 | DEPRECATED ARGS: |
67 | | - [enable-dashboard] Defines if the Kubernetes Dashboard is enabled in the cluster |
68 | | - [ingress] Ingress Controller that will run in the cluster (unknown_ingress | none | nginx | traefik | traefik2) |
| 67 | + [enable-dashboard] Defines whether the Kubernetes Dashboard is enabled in the cluster |
| 68 | + [ingress] Ingress Controller running in the cluster (deprecated feature) (unknown_ingress | none | nginx | traefik | traefik2) |
69 | 69 |
|
70 | 70 | FLAGS: |
71 | 71 | -h, --help help for create |
|