Commit e7c35eb

color code markdown [ci skip]

1 parent 721086f

1 file changed: WalkThrough.md (20 additions, 20 deletions)

@@ -8,14 +8,14 @@ In this article, we shall launch a Kubernetes cluster on Azure and use it from J

Start by deciding on the cluster name and region to launch it in.

-```
+```bash
$ RESOURCE_GROUP=akstest
$ LOCATION=eastus
$ CLUSTERNAME="${RESOURCE_GROUP}cluster"
```

Create a Resource Group to hold our AKS instance.
-```
+```bash
$ az group create --name $RESOURCE_GROUP --location $LOCATION
{
  "id": "/subscriptions/b509b56d-725d-4625-a5d3-ae68c72f15b9/resourceGroups/akstest",
@@ -30,7 +30,7 @@ $ az group create --name $RESOURCE_GROUP --location $LOCATION
```

Use the `az aks create` command to launch the cluster. We start only one node for this example, and use the defaults for most other options.
-```
+```bash
$ az aks create --resource-group $RESOURCE_GROUP --name $CLUSTERNAME --node-count 1 --enable-addons monitoring --generate-ssh-keys
{
...
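
Cluster creation takes several minutes. While waiting, the provisioning state can be polled with the Azure CLI's standard `--query` support; a minimal sketch, reusing the variables set earlier:

```bash
# Prints "Succeeded" once the cluster is fully provisioned
$ az aks show --resource-group $RESOURCE_GROUP --name $CLUSTERNAME \
    --query provisioningState --output tsv
```
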
@@ -49,7 +49,7 @@ $ az aks create --resource-group $RESOURCE_GROUP --name $CLUSTERNAME --node-coun
```

To make it easy to connect to the cluster from our local machine, fetch the cluster credentials and start a `kubectl proxy` process.
-```
+```bash
$ az aks install-cli
$ az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
$ kubectl proxy
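
As a quick sanity check (an extra step beyond the walkthrough), the proxied API server can be probed with curl; `/version` is a standard Kubernetes API endpoint:

```bash
# kubectl proxy listens on 127.0.0.1:8001 by default and should return version JSON
$ curl http://127.0.0.1:8001/version
```
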
@@ -60,7 +60,7 @@ Starting to serve on 127.0.0.1:8001

We shall now explore the cluster using Kuber.jl. First, add the package and load it in the Julia REPL.

-```
+```julia
julia> using Pkg

julia> Pkg.add("Kuber")
@@ -70,14 +70,14 @@ julia> using Kuber

A cluster connection is represented by a `KuberContext` instance. A context encapsulates the TCP connection to the cluster and the connection options. By default, Kuber.jl attempts to connect to a local port opened through `kubectl proxy`. If you followed all the steps in the previous section to start the Azure AKS cluster, you would have already done that.

-```
+```julia
julia> ctx = KuberContext()
Kubernetes namespace default at http://localhost:8001
```
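
If the proxy listens somewhere other than the default port, the context can be pointed at it explicitly. A sketch, assuming the `set_server` helper described in the Kuber.jl README (verify the exact signature against your installed version):

```julia
# Hypothetical non-default proxy address; API versions are re-detected afterwards
julia> Kuber.set_server(ctx, "http://localhost:8080"; reset_api_versions=true)
```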

Kubernetes has multiple API sets and revisions thereof. We need to set the appropriate API versions to interact with our server. The `set_api_versions!` call detects the API versions supported and preferred by the server. They are stored in the context object to be used in subsequent communication.

-```
+```julia
julia> Kuber.set_api_versions!(ctx; verbose=true)
┌ Info: Core versions
│   supported = "v1"
@@ -137,7 +137,7 @@ Things in the cluster are identified by entities. Kubernetes `Pod`, `Job`, `Serv

Now that we have set up our connection properly, let's explore what we have in our cluster. Kubernetes publishes the status of its components, and we can take a look to see if everything is fine. Using Kuber.jl, we fetch the `ComponentStatus` entity with the `get` API.

-```
+```julia
julia> result = get(ctx, :ComponentStatus);

julia> typeof(result)
@@ -146,7 +146,7 @@ Kuber.Kubernetes.IoK8sApiCoreV1ComponentStatusList

Note that we got back a Julia type `Kuber.Kubernetes.IoK8sApiCoreV1ComponentStatusList`. It represents the list of `ComponentStatus` entities we asked for, resolved to match the specific version of the API we used, `CoreV1` in this case. We can display the entity in JSON form in the REPL by simply `show`ing it.

-```
+```julia
julia> result
{
  "apiVersion": "v1",
@@ -174,7 +174,7 @@ julia> result
```

Or we can access it like a regular Julia type and look at individual fields:
-```
+```julia
julia> for item in result.items
           println(item.metadata.name, " ", item.conditions[1]._type, " => ", item.conditions[1].status)
       end
@@ -185,7 +185,7 @@ etcd-0 Healthy => True

Notice that APIs that fetch a list have the entities in a field named `items`, and entities have their name in the `metadata.name` field. We can list the namespaces available in the cluster, and now we can do it succinctly as:

-```
+```julia
julia> collect(item.metadata.name for item in (get(ctx, :Namespace)).items)
3-element Array{String,1}:
 "default"
@@ -195,14 +195,14 @@ julia> collect(item.metadata.name for item in (get(ctx, :Namespace)).items)

And similarly a list of pods:

-```
+```julia
julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
0-element Array{Any,1}
```

We do not have any pods in the default namespace yet, because we have not started any! But there must be some system pods running in the "kube-system" namespace. We can switch to that namespace and look:

-```
+```julia
julia> set_ns(ctx, "kube-system")

julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)
@@ -220,7 +220,7 @@ julia> collect(item.metadata.name for item in (get(ctx, :Pod)).items)

There! Now let's get back to the default namespace and start something of our own. How about an nginx webserver that we can access over the internet? Kubernetes entities can be created from their JSON specification with the `kuber_obj` utility API provided with Kuber.jl.

-```
+```julia
julia> nginx_pod = kuber_obj(ctx, """{
           "kind": "Pod",
           "metadata":{
@@ -262,7 +262,7 @@ Kuber.Kubernetes.IoK8sApiCoreV1Service

To create the pod in the cluster, use the `put!` API. And we should see it when we list the pods.

-```
+```julia
julia> result = put!(ctx, nginx_pod);

julia> collect(item.metadata.name for item in get(ctx, :Pod).items)
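
A single entity can also be fetched by name rather than listing everything; a sketch, assuming the name-taking form of `get` from the Kuber.jl docs, with `status.phase` being the standard V1 Pod lifecycle field:

```julia
# Fetch just the pod we created and inspect its lifecycle phase ("Pending", "Running", ...)
julia> nginx = get(ctx, :Pod, "nginx-pod");

julia> nginx.status.phase
```
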
@@ -272,7 +272,7 @@ julia> collect(item.metadata.name for item in get(ctx, :Pod).items)

We create the service with an external LoadBalancer, to be able to access it from our browser:

-```
+```julia
julia> result = put!(ctx, nginx_service)
{
  "apiVersion": "v1",
@@ -313,7 +313,7 @@ julia> result = put!(ctx, nginx_service)

Note that the `loadBalancer` status field is empty. It takes a while for a load balancer to be hooked up to our service, and we need to wait for that before we can access our webserver!

-```
+```julia
julia> while true
           println("waiting for loadbalancer to be configured...")
           sleep(30)
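
What the loop above waits for is the public address in the service's standard `status.loadBalancer.ingress` field (core V1 Service schema); a sketch of reading it once it is populated:

```julia
# After provisioning, the load balancer's public IP appears in the first ingress entry
julia> svc = get(ctx, :Service, "nginx-service");

julia> svc.status.loadBalancer.ingress[1].ip
```
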
@@ -331,7 +331,7 @@ waiting for loadbalancer to be configured...

Our web server is up! And we can fetch a page from it.

-```
+```bash
shell> curl http://40.121.19.163/
<!DOCTYPE html>
<html>
@@ -364,13 +364,13 @@ Commercial support is available at

Once we are done, we can delete the entities we created in the cluster with the `delete!` API.

-```
+```julia
julia> delete!(ctx, :Service, "nginx-service");
julia> delete!(ctx, :Pod, "nginx-pod");
```

To delete the Kubernetes cluster, we can delete the resource group itself, which also terminates the cluster created under it.

-```
+```bash
$ az group delete --name $RESOURCE_GROUP --yes --no-wait
```
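
Since `--no-wait` returns immediately, deletion completes in the background; whether the resource group is really gone can be checked with the standard `az group exists` command:

```bash
# Prints "false" once the resource group (and the cluster in it) has been deleted
$ az group exists --name $RESOURCE_GROUP
```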
