```

Use the `az aks create` command to launch the cluster. We start with only one node for this example and use the defaults for most other options.

```bash
$ az aks create --resource-group $RESOURCE_GROUP --name $CLUSTERNAME --node-count 1 --enable-addons monitoring --generate-ssh-keys
{
...
```

To make it easy to connect to the cluster from our local machine, fetch the cluster credentials and start a `kubectl proxy` process.

```bash
$ az aks install-cli
$ az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```

We shall now explore the cluster using Kuber.jl. First, add the package and load it in the Julia REPL.

```julia
julia> using Pkg

julia> Pkg.add("Kuber")

julia> using Kuber
```

A cluster connection is represented by a `KuberContext` instance. A context encapsulates the TCP connection to the cluster and the connection options. By default, Kuber.jl attempts to connect to a local port opened through `kubectl proxy`. If you followed all the steps in the previous section to start the Azure AKS cluster, you would have already done that.

```julia
julia> ctx = KuberContext()
Kubernetes namespace default at http://localhost:8001
```

Kubernetes has multiple API sets and revisions thereof. We need to set the appropriate API versions to interact with our server. The `set_api_versions!` call detects the API versions supported and preferred by the server. They are stored in the context object to be used in subsequent communication.

```julia
julia> Kuber.set_api_versions!(ctx; verbose=true)
┌ Info: Core versions
│   supported = "v1"
...
```

Now that we have set up our connection properly, let's explore what we have in our cluster. Kubernetes publishes the status of its components, and we can take a look at it to see if everything is fine. We can use Kuber.jl APIs for that: the `get` API fetches the `ComponentStatus` entity.
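The call that produced `result` falls outside this excerpt; it is simply the `get` API applied to the entity symbol, along these lines, with the result bound to `result` for the steps that follow:

```julia
julia> result = get(ctx, :ComponentStatus);
```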
Note that we got back a Julia type, `Kuber.Kubernetes.IoK8sApiCoreV1ComponentStatusList`. It represents the list of `ComponentStatus` entities that we asked for, resolved to the specific API version we used, `CoreV1` in this case. We can display the entity in JSON form in the REPL by simply `show`ing it.

```julia
julia> result
{
"apiVersion": "v1",
...
```
Or we can access it like a regular Julia type and look at individual fields:
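The accesses themselves are elided here; as a sketch, assuming the standard `ComponentStatus` schema (an `items` list whose elements carry `metadata` and `conditions`), they would look like:

```julia
julia> result.items[1].metadata.name        # name of the first component, e.g. the scheduler

julia> result.items[1].conditions[1].type   # the health condition reported for that component
```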
Notice that APIs that fetch a list return the entities in a field named `items`, and entities have their name in the `metadata.name` field. We can list the namespaces available in the cluster, and now we can do it succinctly as:

```julia
julia> collect(item.metadata.name for item in (get(ctx, :Namespace)).items)
3-element Array{String,1}:
"default"
...
```

And similarly a list of pods:

```julia
julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
0-element Array{Any,1}
```

We do not have any pods in the default namespace yet, because we have not started any! But we must have some system pods running in the "kube-system" namespace. We can switch to that namespace and take a look:

```julia
julia> set_ns(ctx, "kube-system")

julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
...
```

There! Now let's get back to the default namespace and start something of our own. How about an nginx webserver that we can access over the internet? Kubernetes entities can be created from their JSON specification with the `kuber_obj` utility API provided with Kuber.jl.
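The pod specification itself is elided from this excerpt; a sketch of what the `kuber_obj` call would look like (the image, labels, and port here are illustrative, with the name matching the `nginx-pod` used below):

```julia
julia> nginx_pod = kuber_obj(ctx, """{
           "kind": "Pod",
           "metadata": {
               "name": "nginx-pod",
               "namespace": "default",
               "labels": {"name": "nginx-pod"}
           },
           "spec": {
               "containers": [{
                   "name": "nginx",
                   "image": "nginx",
                   "ports": [{"containerPort": 80}]
               }]
           }
       }""");
```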
To create the pod in the cluster, use the `put!` API. We should then see it when we list the pods.

```julia
julia> result = put!(ctx, nginx_pod);

julia> collect(item.metadata.name for item in get(ctx, :Pod).items)
...
```
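The `nginx_service` entity is likewise built with `kuber_obj` from a JSON specification that is not part of this excerpt; a sketch of it, with an illustrative selector matching the pod's label:

```julia
julia> nginx_service = kuber_obj(ctx, """{
           "kind": "Service",
           "metadata": {
               "name": "nginx-service",
               "namespace": "default",
               "labels": {"name": "nginx-service"}
           },
           "spec": {
               "type": "LoadBalancer",
               "ports": [{"port": 80}],
               "selector": {"name": "nginx-pod"}
           }
       }""");
```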
We create the service, with an external LoadBalancer, to be able to access it from our browser:

```julia
julia> result = put!(ctx, nginx_service)
{
"apiVersion": "v1",
...
```

Note that the `loadBalancer` status field is empty. It takes a while to hook up a load balancer to our service, and we need to wait for it before we can access our webserver.

```julia
julia> while true
           println("waiting for loadbalancer to be configured...")
           sleep(30)
...
waiting for loadbalancer to be configured...
...
```
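The elided part of the loop polls the service until the load balancer reports an ingress address; a sketch of that check, assuming `get` accepts an entity name the same way `delete!` does below:

```julia
julia> svc = get(ctx, :Service, "nginx-service");

julia> svc.status.loadBalancer.ingress   # stays empty until Azure assigns an external IP
```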
Our web server is up! And we can fetch a page from it.
```bash
shell> curl http://40.121.19.163/
<!DOCTYPE html>
<html>
...
Commercial support is available at
...
```

Once we are done, we can delete the entities we created in the cluster with the `delete!` API.

```julia
julia> delete!(ctx, :Service, "nginx-service");
julia> delete!(ctx, :Pod, "nginx-pod");
```
To delete the Kubernetes cluster, we can delete the resource group itself, which would also terminate the cluster created under it.

```bash
$ az group delete --name $RESOURCE_GROUP --yes --no-wait