---
title: MKS Premium Plan
excerpt: 'Features and limitations of the MKS Premium Plan in Beta version'
updated: 2025-04-30
---

<style>
  pre {
    font-size: 14px;
  }
  pre.console {
    background-color: #300A24;
    color: #ccc;
    font-family: monospace;
    padding: 5px;
    margin-bottom: 5px;
  }
  pre.console code {
    font-family: monospace !important;
    font-size: 0.75em;
    color: #ccc;
  }
  .small {
    font-size: 0.75em;
  }
</style>

> [!primary]
> This document describes the features and "how-to" of the Managed Kubernetes Service Premium Plan, currently in beta. For additional details on the Managed Kubernetes Service Standard plan, refer to the [following documentation](/pages/public_cloud/containers_orchestration/managed_kubernetes/known-limits).

## Standard vs Premium comparison

| Plan                  | Standard                                            | Premium                                    |
| --------------------- | --------------------------------------------------- | ------------------------------------------ |
| Control plane         | Managed                                             | Managed & cross-AZ resilient               |
| Availability          | 99.5% SLO                                           | 99.99% SLA (at General Availability stage) |
| etcd                  | Shared, up to 400MB                                 | Dedicated, up to 8GB                       |
| Max cluster size      | Up to 100 nodes                                     | Up to 500 nodes                            |
| Regional availability | Single-zone regions (3-AZ regions planned for 2025) | 3-AZ region for now                        |

## Limitations / Upcoming features

To help you make the best use of our new Managed Kubernetes Service (MKS) Premium Plan, we have listed some limitations and guidelines related to specific features.

This list is subject to change as new features are introduced during the Beta period. The end of the Beta phase and General Availability are planned for the end of summer 2025.

### Cluster upgrade

Upgrading an existing cluster is not supported at the moment. We will deliver this functionality once we support the next Kubernetes release (1.33).

### Cluster rename

Renaming an existing cluster is not supported at the moment.

### Logs Data Platform integration

Audit logs forwarding to the [Logs Data Platform](/pages/public_cloud/containers_orchestration/managed_kubernetes/forwarding-audit-logs-to-logs-data-platform) is not supported at the moment.

### etcd quota

Real-time monitoring of etcd storage usage is not supported at the moment. The current etcd quota is 8GB per cluster.

### API server admission plugins configuration

The configuration of the [API server admission plugins](/pages/public_cloud/containers_orchestration/managed_kubernetes/apiserver-flags-configuration) is not available at the moment.

### API Server IP restrictions

To enable IP filtering on the API server, the IP of the gateway in the cluster's OpenStack subnet must be specified.
This allows worker nodes to keep communicating with the API server.

Retrieve the gateway IP of your cluster in the [OVHcloud Control Panel](/links/manager), or by using the following command:

```bash
openstack router show ROUTER_ID -c external_gateway_info
```
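
If you script this step, the gateway IP can be parsed out of the command's JSON output. A minimal sketch in plain shell (the JSON sample and the `203.0.113.10` address below are illustrative stand-ins for what `openstack router show ROUTER_ID -f json -c external_gateway_info` would return):

```bash
# In practice, capture the real command output:
#   response=$(openstack router show ROUTER_ID -f json -c external_gateway_info)
# The sample below stands in for that output (values are illustrative).
response='{"external_gateway_info": {"external_fixed_ips": [{"subnet_id": "<subnet-id>", "ip_address": "203.0.113.10"}]}}'

# Extract the first "ip_address" field with basic POSIX tools (no jq required)
gateway_ip=$(printf '%s' "$response" | grep -o '"ip_address": *"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$gateway_ip"
```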

### Security Policies

Changing the Security Policy after cluster creation is not supported yet.

### Anti-affinity

This feature allows worker nodes to be deployed on different hypervisors (physical servers) within the same availability zone, guaranteeing better fault tolerance. It is not currently supported on the MKS Premium Plan (region EU-WEST-PAR).

We recommend instead using multiple Availability Zones (AZs), with one node pool per AZ to spread worker nodes across AZs.

### Ports

The OpenStack security group for worker nodes is the `default` one. It allows all egress and ingress traffic by default on your private network.

```bash
openstack security group rule list default
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group | Remote Address Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| 0b31c652-b463-4be2-b7e9-9ebb25d619f8 | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                  | None                 |
| 25628717-0339-4caa-bd23-b07376383dba | None        | IPv6      | ::/0      |            | ingress   | None                  | None                 |
| 4b0b0ed2-ed16-4834-a5be-828906ce4f06 | None        | IPv4      | 0.0.0.0/0 |            | ingress   | None                  | None                 |
| 9ac372e3-6a9f-4015-83df-998eec33b790 | None        | IPv6      | ::/0      |            | egress    | None                  | None                 |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
```

For now, it is recommended to leave these security rules in their "default" configuration; otherwise, the nodes could be disconnected from the cluster.

### Reserved IP ranges

The following ranges are used by the cluster and should not be used elsewhere on the private network attached to the cluster.

```text
10.240.0.0/13 # Subnet used by pods
10.3.0.0/16   # Subnet used by services
```

These ranges will be configurable in a future version.

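Before attaching a private network to a cluster, you can sanity-check that its CIDR does not collide with these reserved ranges. A minimal sketch in plain bash (the candidate CIDR `192.168.0.0/24` is only an example):

```bash
#!/usr/bin/env bash
# Check whether a candidate subnet CIDR overlaps the MKS reserved ranges.

ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Returns 0 (true) when the two CIDRs overlap
cidr_overlaps() {
    local a_ip a_len b_ip b_len len mask
    a_ip=$(ip_to_int "${1%/*}"); a_len=${1#*/}
    b_ip=$(ip_to_int "${2%/*}"); b_len=${2#*/}
    # Two CIDRs overlap iff they agree on the shorter prefix
    len=$(( a_len < b_len ? a_len : b_len ))
    mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
    [ $(( a_ip & mask )) -eq $(( b_ip & mask )) ]
}

candidate="192.168.0.0/24"   # example CIDR for your private subnet
for reserved in 10.240.0.0/13 10.3.0.0/16; do
    if cidr_overlaps "$candidate" "$reserved"; then
        echo "$candidate overlaps reserved range $reserved"
    fi
done
```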
## Getting started

### Prerequisites

To create an MKS Premium cluster, a private network and subnet with an attached [OVHcloud Gateway](/links/public-cloud/gateway) (an OpenStack router) are mandatory. Before starting the cluster creation process, please make sure that you have an existing subnet that meets these requirements, or create a new one accordingly.

If you want to use an existing subnet:

- **If the subnet's GatewayIP is already used by an OVHcloud Gateway**, nothing needs to be done. The current OVHcloud Gateway (OpenStack Router) will be used.
- **If the subnet does not have an IP reserved for a Gateway**, you will have to provide or create a compatible subnet. Two options are available:
    - Edit an existing subnet to reserve an IP for a Gateway: refer to the [Update a subnet properties](/pages/public_cloud/public_cloud_network_services/configuration-04-update_subnet) documentation, then create a gateway ([Creating a private network with Gateway](/links/public-cloud/gateway)).
    - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway ([Creating a private network with Gateway](/links/public-cloud/gateway)).
- **If the GatewayIP is already assigned to a non-OVHcloud Gateway (OpenStack Router)**:
    - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway ([Creating a private network with Gateway](/links/public-cloud/gateway)).

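As a sketch, a compatible network, subnet, and gateway can also be created from the OpenStack CLI. The resource names and CIDR below are illustrative, and `Ext-Net` is assumed to be the external network exposed in your region:

```bash
# Create a private network and a subnet whose first address is reserved for the gateway
openstack network create mks-network
openstack subnet create mks-subnet \
    --network mks-network \
    --subnet-range 192.168.0.0/24 \
    --gateway 192.168.0.1 \
    --dhcp

# Create the router (OVHcloud Gateway), plug it into the subnet and the external network
openstack router create mks-router
openstack router add subnet mks-router mks-subnet
openstack router set mks-router --external-gateway Ext-Net
```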
> [!primary]
> Please remember to avoid the MKS reserved IP ranges (see above) for your network CIDR.

> [!primary]
> Using the OVHcloud Control Panel, make sure to check the `Declare the first address of a CIDR given as the default gateway (DHCP option 3)` and `Assign a Gateway and connect to the private network` boxes at network creation.
>
> {.thumbnail}
>

### Create an MKS Premium cluster

The following methods are supported to create an MKS Premium cluster:

> [!tabs]
> Using the OVHcloud Control Panel
>>
>> Log in to the [OVHcloud Control Panel](/links/manager), go to `Public Cloud`{.action} and select the Public Cloud project where you want to deploy the cluster.
>>
>> Access the OVHcloud Managed Kubernetes Service by clicking on `Managed Kubernetes Service`{.action} under Containers & Orchestration in the left-hand menu, then click on `Create a cluster`{.action}.
>>
>> {.thumbnail}
>>
>> Enter a name for your cluster.
>>
>> {.thumbnail}
>>
>> Select '3-AZ Region' as the deployment mode.
>>
>> {.thumbnail}
>>
>> Select 'Paris (EU-WEST-PAR)' as the location.
>>
>> {.thumbnail}
>>
>> Select the `Premium`{.action} plan and click `Next`{.action}.
>>
>> {.thumbnail}
>>
>> Choose the minor Kubernetes version and the Security Policy.
>>
>> > [!primary]
>> > We recommend always using the latest stable version.
>> > Please read our [End of life / end of support](/pages/public_cloud/containers_orchestration/managed_kubernetes/eos-eol-policies) page to understand our version policy.
>>
>> {.thumbnail}
>>
>> Select a private network for your cluster.
>>
>> {.thumbnail}
>>
>> Select a private subnet for your cluster.
>>
>> {.thumbnail}
>>
>> (Optional) You can now configure your node pools. A node pool is a group of nodes sharing the same configuration, giving you a lot of flexibility in your cluster management. Enter a name and select the instance flavor.
>>
>> {.thumbnail}
>>
>> Select the Availability Zone for your node pool.
>>
>> {.thumbnail}
>>
>> Define the size of your first node pool.
>>
>> You can enable the `Autoscaling`{.action} feature for the cluster. In that case, define the minimum and maximum pool sizes.
>>
>> {.thumbnail}
>>
>> Click `Add node pool`{.action}.
>>
>> {.thumbnail}
>>
>> If you want to create a node pool in each Availability Zone, you can repeat this operation by clicking the `Add node pool`{.action} button again and changing the AZ parameter.
>>
>> Finally, click the `Confirm cluster`{.action} button.
>>
>> {.thumbnail}
>>
>> The cluster creation is now in progress. It should be available within a few minutes in your OVHcloud Control Panel.
>>
> Using Terraform
>>
>> Refer to the [dedicated documentation](/pages/public_cloud/containers_orchestration/managed_kubernetes/creating-a-cluster-through-terraform) to create a Managed Kubernetes cluster.
>>
>> Here is a sample Terraform file that creates an MKS Premium cluster and three node pools in three different availability zones of the `EU-WEST-PAR` region.
>>
>> ```hcl
>> terraform {
>>   required_providers {
>>     ovh = {
>>       source = "ovh/ovh"
>>     }
>>   }
>> }
>>
>> provider "ovh" {
>>   endpoint           = "ovh-eu"
>>   application_key    = "<your_access_key>"
>>   application_secret = "<your_application_secret>"
>>   consumer_key       = "<your_consumer_key>"
>> }
>>
>> resource "ovh_cloud_project_kube" "my_kube_cluster" {
>>   service_name       = var.service_name
>>   name               = "lgr-terraform-test-3az"
>>   region             = "EU-WEST-PAR"
>>   version            = "1.31"
>>   private_network_id = "<OpenStack Network Id>"
>>   nodes_subnet_id    = "<OpenStack Subnet Id>"
>> }
>>
>> resource "ovh_cloud_project_kube_nodepool" "node_pool" {
>>   service_name       = var.service_name
>>   kube_id            = ovh_cloud_project_kube.my_kube_cluster.id
>>   name               = "my-pool-a-1"
>>   flavor_name        = "b3-8"
>>   availability_zones = ["eu-west-par-a"]
>>   desired_nodes      = 1
>> }
>>
>> resource "ovh_cloud_project_kube_nodepool" "node_pool_b" {
>>   service_name       = var.service_name
>>   kube_id            = ovh_cloud_project_kube.my_kube_cluster.id
>>   name               = "my-pool-b-1"
>>   flavor_name        = "b3-8"
>>   availability_zones = ["eu-west-par-b"]
>>   desired_nodes      = 1
>> }
>>
>> resource "ovh_cloud_project_kube_nodepool" "node_pool_c" {
>>   service_name       = var.service_name
>>   kube_id            = ovh_cloud_project_kube.my_kube_cluster.id
>>   name               = "my-pool-c-1"
>>   flavor_name        = "b3-8"
>>   availability_zones = ["eu-west-par-c"]
>>   desired_nodes      = 1
>> }
>>
>> output "kubeconfig_file" {
>>   value     = ovh_cloud_project_kube.my_kube_cluster.kubeconfig
>>   sensitive = true
>> }
>> ```
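>>
>> Once the file is in place, the kubeconfig can be exported from the Terraform output to access the cluster. A sketch of the workflow (assuming Terraform and `kubectl` are installed and the credentials above are valid):
>>
>> ```bash
>> terraform init
>> terraform apply
>>
>> # The kubeconfig output is marked sensitive; write it to a file and point kubectl at it
>> terraform output -raw kubeconfig_file > kubeconfig.yml
>> kubectl --kubeconfig=./kubeconfig.yml get nodes
>> ```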

## Go further

- If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts to assist you with the specific use case of your project.

- Join our [community of users on Discord](https://discord.gg/ovhcloud)!