
Commit 87a85a1

formatting improvements and standardisation changes

1 parent 02ce210
File tree: 4 files changed, +65 −47 lines


content/blog/.markdownlint.json

Lines changed: 10 additions & 0 deletions
```
@@ -0,0 +1,10 @@
+{
+  "MD033": {
+    "allowed_elements": [
+      "nobr",
+      "sup",
+      "p"
+    ]
+  },
+  "MD013": false
+}
```

content/blog/rpi-netboot-automation.md

Lines changed: 1 addition & 1 deletion
```
@@ -190,7 +190,7 @@ configure_pxe_services() {

 The `addresslist.txt` file serves as a record of processed MAC addresses. Before executing the setup for a new MAC address, the script checks this list to ensure it hasn't already been configured. This prevents unnecessary duplication of efforts. Create `addresslist.txt` in the `/opt` directory. Here's an example of how the `addresslist.txt` file might look:

-```
+```text
 00-11-22-33-44-55
 aa-bb-cc-dd-ee-ff
 ```
```
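The duplicate check this hunk's context describes can be sketched as a small shell helper. This is a hypothetical sketch, not the post's actual script: the `ADDRESS_LIST` path comes from the prose above, but the function names and structure are assumptions.

```shell
# Hypothetical sketch of the addresslist.txt bookkeeping described above.
# The function names are illustrative, not taken from the real script.
ADDRESS_LIST="${ADDRESS_LIST:-/opt/addresslist.txt}"

mac_already_configured() {
  # -F literal string, -x whole-line match, -q exit status only
  grep -Fxq "$1" "$ADDRESS_LIST" 2>/dev/null
}

record_mac() {
  echo "$1" >> "$ADDRESS_LIST"
}

setup_if_new() {
  if mac_already_configured "$1"; then
    echo "skipping $1: already configured"
  else
    record_mac "$1"
    echo "configuring $1"   # the real per-MAC setup steps would run here
  fi
}
```

Because `grep -Fxq` matches the whole line literally, a MAC is only skipped on an exact match, which is why the list should store addresses in one consistent format (dash-separated, as in the example above).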

content/blog/rpi-netboot-deep-dive.md

Lines changed: 5 additions & 4 deletions
```
@@ -11,14 +11,14 @@ weight: 1

 ### What is PXE Boot?

-The Preboot Execution Environment (PXE) specification describes a standardized clientserver environment that boots a software assembly retrieved from a network on PXE-enabled clients.
+The Preboot Execution Environment (PXE) specification describes a standardized client-server environment that boots a software assembly retrieved from a network on PXE-enabled clients.

 ### Why PXE Boot?

 - Simplify Pi provisioning and maintenance as much as possible.
 - Automate updates/upgrades as much as possible.

-Netbooting is a good path to achieve this. For example, when you netboot a Pi, it does not require an SD card to boot. The OS and file system live on a central server. Because most of the provisioning happens on a central server, we can eventually automate it via scripts.
+Netbooting is a good path to achieve this. For example, when you netboot a Pi, it does not require an SD card to boot. The OS and file system live on a central server. Because most provisioning happens on a central server, we can eventually automate it via scripts.

 ### PXE Boot Sequence

@@ -66,7 +66,7 @@ Netbooting is a good path to achieve this. For example, when you netboot a Pi, i
 To implement netbooting for Raspberry Pi devices, you'll need the following components:

 - Raspberry Pi units acting as PXE clients connected via Ethernet to the router or switch.
-- Raspberry Pi acting as a PXE server containing boot files and user files connected via Ethernet to the router or switch.
+- Raspberry Pi as a PXE server containing boot files and user files connected via Ethernet to the router or switch.

 ## Configuration

@@ -132,6 +132,7 @@ $ sudo mkdir -p /tftpboot/<MAC_ADDRESS>
 $ sudo chmod 777 /tftpboot
 $ sudo cp -r ~/bootfiles/* /tftpboot/<MAC_ADDRESS>
 ```
+
 **Important Note:** Replace `192.168.XX.XX` with the IP address of PXE server and `<MAC_ADDRESS>` with the MAC address of PXE client.

 #### Configure `dnsmasq.conf`

@@ -156,4 +157,4 @@ EOF

 ### Conclusion

-In conclusion, implementing PXE boot for Raspberry Pi devices streamlines deployment and maintenance by centralizing the boot process and file systems. Automation of these processes further enhances efficiency and reduces manual intervention. To learn more about automating Raspberry Pi deployment using PXE boot, click [here](https://www.infraspec.dev/blog/rpi-netboot-automation).
+In conclusion, implementing PXE boot for Raspberry Pi devices streamlines deployment and maintenance by centralizing the boot process and file systems. Automation of these processes further enhances efficiency and reduces manual intervention. To learn more about automating Raspberry Pi deployment using PXE boot, click [here](https://www.infraspec.dev/blog/rpi-netboot-automation).
```

content/blog/setting-up-ingress-on-eks.md

Lines changed: 49 additions & 42 deletions
```
@@ -11,31 +11,31 @@ weight: 1

 * * * * *

-**What is an Ingress?**
------------------------
+## What is an Ingress?

 An Ingress in Kubernetes provides external access to your Kubernetes services. You configure access by creating a collection of routing rules that define which requests reach which services.

 This lets you consolidate your routing rules into a single resource. For example, you might want to send requests to [mydomain.com/](http://example.com/api/v1)api1 to the api1 service, and requests to [mydomain.com/](http://example.com/api/v2)api2 to the api2 service. With an Ingress, you can easily set this up without creating a bunch of LoadBalancers or exposing each service on the Node.

 *An Ingress provides the following:*

-- Externally reachable URLs for applications deployed in Kubernetes clusters
-- Name-based virtual host and URI-based routing support
-- Load Balancing rules and traffic, as well as SSL termination
+- Externally reachable URLs for applications deployed in Kubernetes clusters
+- Name-based virtual host and URI-based routing support
+- Load Balancing rules and traffic, as well as SSL termination

-**Kubernetes Service types -- an overview**
-------------------------------------------
+## Kubernetes Service types -- an overview

 ### ClusterIP

 A ClusterIP service is the default service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.

-![ClusterIP](/images/blog/setting-up-ingress-on-eks/cluster-ip.png)
+<p align="center">
+  <img width="500px" src="/images/blog/setting-up-ingress-on-eks/cluster-ip.png" alt="ClusterIP">
+</p>

 *The YAML for ClusterIP service looks like this:*

-```shell
+```yaml
 apiVersion: v1
 kind: Service
 metadata:
```
```
@@ -55,15 +55,15 @@ spec:

 Create the service:

-```shell
+```bash
 $ kubectl apply -f nginx-svc.yaml
 service/nginx-service created

 ```

 Check it:

-```shell
+```bash
 $ kubectl get svc nginx-serice
 NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
 nginx-service   ClusterIP   172.20.54.138   <none>        80/TCP    38s
```
```
@@ -76,7 +76,9 @@ A NodePort service is the most primitive way to get external traffic directly to

 When we create a Service of type NodePort, Kubernetes gives us a nodePort value. Then the Service is accessible by using the IP address of any node along with the nodePort value. In other words, when a user sets the Service type field to NodePort, the Kubernetes master allocates a static port from a range, and each Node will proxy that port (the same port number on every Node) into our Service.

-![NodePort](/images/blog/setting-up-ingress-on-eks/node-port.png)
+<p align="center">
+  <img width="500px" src="/images/blog/setting-up-ingress-on-eks/node-port.png" alt="NodePort">
+</p>

 *The YAML for NodePort service looks like this:*
```
```
@@ -128,11 +130,13 @@ In case of AWS -- will create an AWS Load Balancer, by default Classic type, whi

 `LoadBalancer` type provides a Public IP address or DNS name to which the external users can connect. The traffic flows from the LoadBalancer to a mapped service on a designated port, which eventually forwards it to the healthy pods. Note that LoadBalancers doesn't have a direct mapping to the pods.

-![LoadBalancer](/images/blog/setting-up-ingress-on-eks/load-balancer.png)
+<p align="center">
+  <img width="400px" src="/images/blog/setting-up-ingress-on-eks/load-balancer.png" alt="LoadBalancer">
+</p>

 *The YAML for LoadBalancer service looks this:*

-```shell
+```yaml
 apiVersion: v1
 kind: Service
 metadata:
```
```
@@ -145,20 +149,19 @@ spec:
   ports:
     - name: http
       port: 80
-
 ```

 Apply it:

-```shell
+```yaml
 $ kubectl apply -f nginx-svc.yaml
 service/nginx-service created

 ```

 Check:

-```shell
+```yaml
 $ kubectl get svc nginx-service
 NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                                                PORT(S)        AGE
 nginx-service   LoadBalancer   172.20.54.138   ac8415de24f6c4db9b5019f789792e45-443260761.ap-south-1.elb.amazonaws.com   80:30968/TCP   1h
```
```
@@ -167,25 +170,28 @@ nginx-service LoadBalancer 172.20.54.138 ac8415de24f6c4db9b5019f789792e45-

 So, the *LoadBalancer* service type:

-- will provide external access to pods
-- will provide a basic load-balancing to pods on different EC2
-- will give an ability to terminate SSL/TLS sessions
-- doesn't support level-7 routing
+- will provide external access to pods
+- will provide a basic load-balancing to pods on different EC2
+- will give an ability to terminate SSL/TLS sessions
+- doesn't support level-7 routing

 *The big downside is that each service you expose with a LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive!*

-Ingress
--------
+## Ingress

 Unlike all the above examples, Ingress is actually NOT a dedicated Service. Instead, it sits in front of multiple services and act as a "smart router" or entrypoint into your cluster.

 It just describes a set of rules for the Kubernetes Ingress Controller to create a Load Balancer, its Listeners, and routing rules for them.

 An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type [Service.Type=NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) or [Service.Type=LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).

-![Ingress](/images/blog/setting-up-ingress-on-eks/ingress.png)
+<p align="center">
+  <img width="500px" src="/images/blog/setting-up-ingress-on-eks/ingress.png" alt="Ingress">
+</p>

-![Ingress managed LB](/images/blog/setting-up-ingress-on-eks/ingress-managed-lb.png)
+<p align="center">
+  <img width="700px" src="/images/blog/setting-up-ingress-on-eks/ingress-managed-lb.png" alt="Ingress managed LB">
+</p>

 _image source: https://kubernetes.io/docs/concepts/services-networking/ingress/_
```
```
@@ -197,14 +203,14 @@ An [Ingress controller](https://kubernetes.io/docs/concepts/services-networking

 Install:

-```shell
+```bash
 $ kubectl apply -f <https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml>

 ```

 Output:

-```shell
+```bash
 namespace/ingress-nginx created
 serviceaccount/ingress-nginx created
 clusterrole.rbac.authorization.k8s.io/ingress-nginx created
```
```
@@ -228,17 +234,17 @@ job.batch/ingress-nginx-admission-patch created

 A few pods should start in the `ingress-nginx` namespace:

-```shell
+```bash
 $ kubectl get pods --namespace=ingress-nginx

 ```

 After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

-```shell
-$ kubectl wait --namespace ingress-nginx \\
-  --for=condition=ready pod \\
-  --selector=app.kubernetes.io/component=controller \\
+```bash
+$ kubectl wait --namespace ingress-nginx \
+  --for=condition=ready pod \
+  --selector=app.kubernetes.io/component=controller \
   --timeout=120s

 ```
```
```
@@ -249,7 +255,7 @@ This has set up the Nginx Ingress Controller. Now, we can create Ingress resourc

 First, let's create two services to demonstrate how the Ingress routes our request. We'll run two web applications that output a slightly different response.

-```shell
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:
```
```
@@ -278,7 +284,7 @@ spec:

 ```

-```shell
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:
```
```
@@ -309,15 +315,15 @@ spec:

 Create the resources

-```shell
+```bash
 $ kubectl apply -f apple.yaml
 $ kubectl apply -f banana.yaml

 ```

 Now, declare an Ingress to route requests to `/apple` to the first service, and requests to `/banana` to second service.

-```shell
+```yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
```
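The path matching this Ingress declares can be sketched outside Kubernetes as a tiny shell function. This is an illustration of prefix-based routing only, not controller code; `route` and `default-backend` are hypothetical names, while `apple-service` and `banana-service` come from the example above.

```shell
# Illustrative sketch of prefix-based Ingress routing; not real controller logic.
route() {
  case "$1" in
    /apple*)  echo "apple-service"  ;;  # rule: path prefix /apple
    /banana*) echo "banana-service" ;;  # rule: path prefix /banana
    *)        echo "default-backend" ;; # no rule matched
  esac
}

route /apple/v1   # apple-service
route /banana     # banana-service
```

The same first-match-by-prefix idea is what the nginx controller compiles the Ingress rules into when it generates its server configuration.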
```
@@ -349,7 +355,7 @@ Here we set two rules: if URI == */apple* or */banana* -- then send the traffi

 Deploy it:

-```shell
+```bash
 $ kubectl apply -f ingress.yaml

 ```
```
```
@@ -358,7 +364,7 @@ Another example is the hostname-based routing.

 *Update the manifest:*

-```shell
+```yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
```
```
@@ -393,13 +399,15 @@ The Ingress controller provisions an implementation-specific load balancer that

 Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.

-![Name based virtual hosting](/images/blog/setting-up-ingress-on-eks/name-based-virtual-hosting.png)
+<p align="center">
+  <img width="500px" src="/images/blog/setting-up-ingress-on-eks/name-based-virtual-hosting.png" alt="Name based virtual hosting">
+</p>

 image source: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting

 The following Ingress tells the backing load balancer to route requests based on the Host header.

-```shell
+```yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
```
```
@@ -431,8 +439,7 @@ spec:

 ```

 Here we left Services without changes, but in the Rules, we set that a request to the *[apple.service.com](http://apple.service.com)* must be sent to the *apple-service* and *[banana.service.com](http://banana.service.com)* to the banana-service.

-Summary
--------
+## Summary

 A Kubernetes Ingress is a robust way to expose your services outside the cluster. It lets you consolidate your routing rules to a single resource, and gives you powerful options for configuring these rules.
```
