The Julia package Kuber.jl makes Kubernetes clusters easy to use and plug into from Julia code.

In this article, we shall launch a Kubernetes cluster on Azure and use it from Julia. Kuber.jl can also be used with Kubernetes clusters created by other means. In the command samples in this article, those with a `$` prefix are run in a shell, and those with a `julia>` prefix are run in the Julia REPL.

## Kubernetes Cluster using Azure AKS

[Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. We use the [Azure CLI utility](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest) from a Unix shell to create the cluster.

Start by deciding on the cluster name and region to launch it in.

```
$ RESOURCE_GROUP=akstest
$ LOCATION=eastus
$ CLUSTERNAME="${RESOURCE_GROUP}cluster"
```

Create a Resource Group to hold our AKS instance:
```
$ az group create --name $RESOURCE_GROUP --location $LOCATION
{
  "id": "/subscriptions/b509b56d-725d-4625-a5d3-ae68c72f15b9/resourceGroups/akstest",
  "location": "eastus",
  "managedBy": null,
  "name": "akstest",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}
```

Use the `az aks create` command to launch the cluster. We start with only one node for this example, and use the defaults for most other options.
```
$ az aks create --resource-group $RESOURCE_GROUP --name $CLUSTERNAME --node-count 1 --enable-addons monitoring --generate-ssh-keys
{
  ...
  "location": "eastus",
  "name": "akstestcluster",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  ...
}
```

To make it easy to connect to the cluster from our local machine, fetch the cluster credentials and start a `kubectl proxy` process.
```
$ az aks install-cli
$ az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```

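If you prefer to drive everything from Julia, the proxy can also be launched as a background process from the REPL. This is a minimal sketch using Base's `run`, assuming `kubectl` is already on your `PATH`:

```
julia> proxy = run(`kubectl proxy`; wait=false)   # start the proxy in the background
Process(`kubectl proxy`, ProcessRunning)

julia> # ... interact with the cluster ...

julia> kill(proxy)                                # stop the proxy when done
```
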
## Connecting to the Kubernetes Cluster

We shall now explore the cluster using Kuber.jl. First, add the package and load it in the Julia REPL.

```
julia> using Pkg

julia> Pkg.add("Kuber")

julia> using Kuber
```

A cluster connection is represented by a `KuberContext` instance. A context encapsulates the TCP connection to the cluster and the connection options. By default, Kuber.jl attempts to connect to a local port opened through `kubectl proxy`. If you followed the steps in the previous section to start the Azure AKS cluster, you will have already done that.

```
julia> ctx = KuberContext()
Kubernetes namespace default at http://localhost:8001
```

Kubernetes has multiple API sets, and revisions thereof. We need to set the appropriate API versions to interact with our server. The `set_api_versions!` call detects the API versions supported and preferred by the server. They are stored in the context object and used in subsequent communication.

```
julia> Kuber.set_api_versions!(ctx; verbose=true)
┌ Info: Core versions
│ supported = "v1"
└ preferred = "v1"
┌ Info: apiregistration.k8s.io (Apiregistration) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
┌ Info: extensions (Extensions) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
┌ Info: apps (Apps) versions
│ supported = "v1, v1beta2, v1beta1"
└ preferred = "v1"
┌ Info: events.k8s.io (Events) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
┌ Info: authentication.k8s.io (Authentication) versions
│ supported = "v1, v1beta1"
└ preferred = "v1"
┌ Info: authorization.k8s.io (Authorization) versions
│ supported = "v1, v1beta1"
└ preferred = "v1"
┌ Info: autoscaling (Autoscaling) versions
│ supported = "v1, v2beta1"
└ preferred = "v1"
┌ Info: batch (Batch) versions
│ supported = "v1, v1beta1"
└ preferred = "v1"
┌ Info: certificates.k8s.io (Certificates) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
┌ Info: networking.k8s.io (Networking) versions
│ supported = "v1"
└ preferred = "v1"
┌ Info: policy (Policy) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
┌ Info: rbac.authorization.k8s.io (RbacAuthorization) versions
│ supported = "v1, v1beta1"
└ preferred = "v1"
┌ Info: storage.k8s.io (Storage) versions
│ supported = "v1, v1beta1"
└ preferred = "v1"
┌ Info: admissionregistration.k8s.io (Admissionregistration) versions
│ supported = "v1beta1, v1alpha1"
└ preferred = "v1beta1"
┌ Info: apiextensions.k8s.io (Apiextensions) versions
│ supported = "v1beta1"
└ preferred = "v1beta1"
```

Things in the cluster are represented by entities. Kubernetes `Pod`, `Job` and `Service` are all examples of entities. APIs are verbs that act on those entities. There are only a handful of API verbs we need to remember in Kuber.jl to interact with all of them (their call shapes are sketched after the list):
- `get`: list or fetch entities
- `put!`: create entities
- `update!`: update existing entities
- `delete!`: delete existing entities

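All of these verbs take the context as the first argument. A minimal sketch of the call shapes, restricted to the forms that appear later in this article (the entity names are just illustrative):

```
julia> pods = get(ctx, :Pod)                       # list all entities of a kind in the current namespace

julia> svc = get(ctx, :Service, "nginx-service")   # fetch a single entity by name

julia> put!(ctx, nginx_pod)                        # create an entity from a Julia object

julia> delete!(ctx, :Pod, "nginx-pod")             # delete an entity by kind and name
```

`update!` is used similarly to modify entities that already exist; see the package documentation for its exact arguments.
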
Now that we have set up our connection properly, let's explore what we have in our cluster. Kubernetes publishes the status of its components, and we can take a look to see if everything is fine. To do that, we use the `get` API to fetch the `ComponentStatus` entity.

```
julia> result = get(ctx, :ComponentStatus);

julia> typeof(result)
Kuber.Kubernetes.IoK8sApiCoreV1ComponentStatusList
```

Note that we got back a Julia type, `Kuber.Kubernetes.IoK8sApiCoreV1ComponentStatusList`. It represents the list of `ComponentStatus` entities that we asked for, resolved to the specific version of the API we used - `CoreV1` in this case. We can display the entity in JSON form in the REPL by simply `show`ing it.

```
julia> result
{
  "apiVersion": "v1",
  "items": [
    {
      "conditions": [
        {
          "message": "{\"health\": \"true\"}",
          "status": "True",
          "type": "Healthy"
        }
      ],
      "metadata": {
        "name": "etcd-0",
        "selfLink": "/api/v1/componentstatuses/etcd-0"
      }
    }
    ...
  ],
  "kind": "ComponentStatusList",
  "metadata": {
    "selfLink": "/api/v1/componentstatuses"
  }
}
```

Or we can access it like a regular Julia type and look at individual fields:
```
julia> for item in result.items
           println(item.metadata.name, " ", item.conditions[1]._type, " => ", item.conditions[1].status)
       end
controller-manager Healthy => False
scheduler Healthy => False
etcd-0 Healthy => True
```

Notice that APIs that fetch a list have the entities in a field named `items`, and entities have their name in the `metadata.name` field. We can list the namespaces available in the cluster, and now we can do it succinctly as:

```
julia> collect(item.metadata.name for item in (get(ctx, :Namespace)).items)
3-element Array{String,1}:
 "default"
 "kube-public"
 "kube-system"
```

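Since this list-and-collect pattern recurs, it can be handy to wrap it in a small helper. This is a hypothetical convenience function of our own, not part of Kuber.jl:

```
julia> entity_names(list) = [item.metadata.name for item in list.items]   # names of all entities in a list
entity_names (generic function with 1 method)

julia> entity_names(get(ctx, :Namespace))
3-element Array{String,1}:
 "default"
 "kube-public"
 "kube-system"
```
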
And similarly a list of pods:

```
julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
0-element Array{Any,1}
```

We do not have any pods in the default namespace yet, because we have not started any! But there must be some system pods running in the "kube-system" namespace. We can switch namespaces and take a look:

```
julia> set_ns(ctx, "kube-system")

julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
9-element Array{String,1}:
 "heapster-779db6bd48-pbclv"
 "kube-dns-v20-b8ff799f7-fjtw9"
 "kube-dns-v20-b8ff799f7-mhdkp"
 "kube-proxy-fmzbz"
 "kube-svc-redirect-lkxzn"
 "kubernetes-dashboard-7fbf669f58-7cf92"
 "omsagent-c6d7j"
 "omsagent-rs-7588f569b9-bs27t"
 "tunnelfront-c66db54d9-hj2jm"
```

There! Now let's switch back to the default namespace and start something of our own. How about an nginx webserver that we can access over the internet? Kubernetes entities can be created from their JSON specifications with the `kuber_obj` utility API provided with Kuber.jl.

```
julia> set_ns(ctx, "default")

julia> nginx_pod = kuber_obj(ctx, """{
           "kind": "Pod",
           "metadata": {
               "name": "nginx-pod",
               "namespace": "default",
               "labels": {
                   "name": "nginx-pod"
               }
           },
           "spec": {
               "containers": [{
                   "name": "nginx",
                   "image": "nginx",
                   "ports": [{"containerPort": 80}]
               }]
           }
       }""");

julia> typeof(nginx_pod)
Kuber.Kubernetes.IoK8sApiCoreV1Pod

julia> nginx_service = kuber_obj(ctx, """{
           "kind": "Service",
           "metadata": {
               "name": "nginx-service",
               "namespace": "default",
               "labels": {"name": "nginx-service"}
           },
           "spec": {
               "type": "LoadBalancer",
               "ports": [{"port": 80}],
               "selector": {"name": "nginx-pod"}
           }
       }""");

julia> typeof(nginx_service)
Kuber.Kubernetes.IoK8sApiCoreV1Service
```

To create the pod in the cluster, use the `put!` API. We should then see it when we list the pods.

```
julia> result = put!(ctx, nginx_pod);

julia> collect(item.metadata.name for item in get(ctx, :Pod).items)
1-element Array{String,1}:
 "nginx-pod"
```

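The pod takes a few moments to pull the nginx image and start. The generated entity types mirror the Kubernetes API, so polling the pod's `status.phase` should tell us when it is running. A quick check, assuming the standard Kubernetes `status` layout; the output shown is illustrative:

```
julia> get(ctx, :Pod, "nginx-pod").status.phase
"Running"
```
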
We then create the service, with an external `LoadBalancer`, so that we can access the webserver from our browser:

```
julia> result = put!(ctx, nginx_service)
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "creationTimestamp": "2018-12-07T06:24:26Z",
    "labels": {
      "name": "nginx-service"
    },
    "name": "nginx-service",
    "namespace": "default",
    "resourceVersion": "3172",
    "selfLink": "/api/v1/namespaces/default/services/nginx-service",
    "uid": "bf289d78-f9e8-11e8-abb2-cad68e0bf188"
  },
  "spec": {
    "clusterIP": "10.0.191.35",
    "externalTrafficPolicy": "Cluster",
    "ports": [
      {
        "nodePort": 32527,
        "port": 80,
        "protocol": "TCP",
        "targetPort": "80"
      }
    ],
    "selector": {
      "name": "nginx-pod"
    },
    "sessionAffinity": "None",
    "type": "LoadBalancer"
  },
  "status": {
    "loadBalancer": {}
  }
}
```

Note that the `loadBalancer` status field is empty. It takes a while for the load balancer to be hooked up to our service, and we need to wait for that before we can access our webserver!

```
julia> while true
           println("waiting for loadbalancer to be configured...")
           sleep(30)
           status = get(ctx, :Service, "nginx-service").status
           if status.loadBalancer.ingress !== nothing && !isempty(status.loadBalancer.ingress)
               println(status.loadBalancer.ingress[1].ip)
               break
           end
       end
waiting for loadbalancer to be configured...
waiting for loadbalancer to be configured...
waiting for loadbalancer to be configured...
40.121.19.163
```

Our web server is up! And we can fetch a page from it.

```
shell> curl http://40.121.19.163/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

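We could equally fetch the page from within Julia, for example with the HTTP.jl package (an assumption here - it is a separate package that needs to be added first):

```
julia> using HTTP

julia> resp = HTTP.get("http://40.121.19.163/");   # fetch the nginx welcome page

julia> resp.status
200
```
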
## Cleaning up

Once we are done, we can delete the entities we created in the cluster with the `delete!` API.

```
julia> delete!(ctx, :Service, "nginx-service");

julia> delete!(ctx, :Pod, "nginx-pod");
```

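Listing the pods again, as we did earlier, should now come back empty:

```
julia> collect(item.metadata.name for item in get(ctx, :Pod).items)
0-element Array{Any,1}
```
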
To delete the Kubernetes cluster, we can delete the resource group itself, which also terminates the cluster created under it.

```
$ az group delete --name $RESOURCE_GROUP --yes --no-wait
```