Commit 620a8d4
updates readme
1 parent 1da1bd6 commit 620a8d4

File tree: 10 files changed, +121 −37 lines changed

README.md

Lines changed: 93 additions & 19 deletions
@@ -12,10 +12,11 @@ This repo builds out and runs requests against a NATS SuperCluster running in AW
 - `/eks-setup`: OpenTofu and Kubernetes configuration files specific to the AWS environment running in `us-east-1`
 - `/k8s-configs`: the Kubernetes configuration files used to deploy services into both environments
 - `/services`: the Python code for the various services that are used for the demo
+- `/jetstream`: OpenTofu configurations for managing Streams and Consumers in JetStream

 ## Some FYI

-This is project meant for demo purposes only. While it employs authenticationa and authorization for all of the exposed endpoints that it creates, the authentication credentials are stored in this repo and are therefore insecure. Deploy this repo only in ad hoc environments that you can stand up and tear down without impacting your production environments. **Do not run this project in its current stat in production.**
+This is a project meant for demo purposes only. While it employs authentication and authorization for all of the exposed endpoints that it creates, the authentication credentials are stored in this repo and are therefore insecure. Deploy this repo only in ad hoc environments that you can stand up and tear down without impacting your production environments. **Do not run this project in its current state in production.**

 ## Setting up the environments

@@ -29,7 +30,7 @@ Both the Azure and AWS environments are set up using a Bash script, which can be

 To set up the Azure environment, `cd` into the `/aks-setup` folder and run:

-```
+```sh
 sh aks-setup.sh
 ```

@@ -42,7 +43,7 @@ This will:

 To set up the AWS environment, `cd` into the `/eks-setup` folder and run:

-```
+```sh
 sh eks-setup.sh
 ```

@@ -58,23 +59,13 @@ This will:

 When AWS adds a new context to your default kubeconfig, it uses the AWS CLI to authenticate each request made by kubectl. If you want to use a Kubernetes UI like [Lens](https://k8slens.dev/) to interact with your clusters, you'll need to create a separate kubeconfig file. The script and K8s config file necessary to do that have been provided in `/eks-setup/kubeconfig-setup`. Assuming you have the EKS cluster set as your current context, run the following from the `/eks-setup` directory:

-```
+```sh
 kubectl apply -f kubeconfig-setup/admin-sa.yaml
 sh kubeconfig-setup/create-sa-token.sh
 ```

 That will create a kubeconfig file tied to a ServiceAccount that has cluster-admin privileges. You can then load that into Lens as a new kubeconfig file.

-## Deploying the Services
-
-When configured against either context, you can deploy the services in bulk by running:
-
-```
-kubectl apply -f k8s-configs
-```
-
-This will run all four mathmatical services as Deployments with 3 instances, as well as the requester Job. Note that the requester Job will deploy and possibly run before the other services are ready, so you may need to run it again to see intended results.
-
 ## NATS Gateways and Cluster Connectivity

 In a production environment, you'd likely already have a domain name associated with your K8s cluster ingress that you can map to the service used by the NATS servers.
@@ -87,15 +78,15 @@ Again, in production, you likely wouldn't have to do this.

 The quickest way to get this information is to run the following against the AWS context:

-```
+```sh
 kubectl get svc nats-east -o json | jq .status.loadBalancer.ingress
 ```

 That will print out a hostname that you can use as the endpoint for your gateway in AWS.

 For Azure:

-```
+```sh
 kubectl get svc nats-west -o json | jq .status.loadBalancer.ingress
 ```

@@ -105,6 +96,88 @@ If you don't have `jq` installed, you can find it here: https://jqlang.org/

 Otherwise you can just run `kubectl get svc nats-east -o json` and sift through the output.
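As a jq-free alternative, `kubectl`'s built-in jsonpath output can print the endpoint directly. This is a sketch: it assumes the standard `status.loadBalancer` fields, and that the Azure service is named `nats-west` as in `aks-setup/aks-nats-values.yaml`:

```sh
# AWS publishes the NLB endpoint as a hostname
kubectl get svc nats-east -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Azure publishes the load balancer endpoint as an IP
kubectl get svc nats-west -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```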
+## TLS Connections
+
+This project deviates from the [previous one](https://github.com/colinjlacy/nats-cluster-demo) by [using mTLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) to connect services to the NATS resources running in each Kubernetes cluster.
+
+**Please keep in mind that this approach is for demo purposes, and is not necessarily recommended for production use. Check with your team and business to see what security and compliance requirements exist for using TLS certs within your organization.**
+
+To get started, deploy cert-manager into each Kubernetes cluster. You can find installation instructions [on the cert-manager website](https://cert-manager.io/docs/installation/helm/).
+
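For reference, the Helm-based install from those instructions looks roughly like this (a sketch; flag names vary across chart versions - older charts use `installCRDs=true` instead of `crds.enabled=true`):

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update
# install cert-manager and its CRDs into a dedicated namespace;
# run once per cluster, against each kubectl context
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true
```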
+Once you have that installed, you can start creating certs to match your deployment stack. Each Kubernetes cluster requires its own cert hierarchy, since they'll each use TLS for connecting internally. In a production setting, you would use a common Certificate Authority (CA) to create certs within each cluster, so that they could all reference the same hierarchy. For now, and for this demo, cluster-specific CAs will do.
+
+In each setup folder, you'll see a YAML file for setting up certs - `nats-tls-east.yaml` in `eks-setup/` and `nats-tls-west.yaml` in `aks-setup/`. Each one is *almost* ready to use. However, they each need the public endpoint of the load balancer used to expose NATS to the public internet. This is important because without it, you won't be able to create a NATS context in the CLI, nor connect locally using OpenTofu to create JetStream resources.
+
+In `eks-setup/nats-tls-east.yaml`, populate line 55 with the hostname of the NLB that was used to expose NATS.
+
+In `aks-setup/nats-tls-west.yaml`, populate line 56 with the IP of the Azure Load Balancer that was used to expose NATS.
+
+Now apply each one to its respective cluster. That should create a series of Secret resources in each cluster. Note that all of the service Deployment and Job YAML files have been updated to reference these Secret resources and pull in their values. You don't have to do anything there.
+
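Concretely, that step can be sketched as follows (paths relative to the repo root, each command run with the matching kubectl context active):

```sh
# EKS context active:
kubectl apply -f eks-setup/nats-tls-east.yaml
# AKS context active:
kubectl apply -f aks-setup/nats-tls-west.yaml

# in each cluster, confirm the cert Secrets were created
kubectl get secrets
```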
+## JetStream Streams and Consumers
+
+With TLS certs all set up, you can start to set up JetStream. First, you'll need to pull down the values in the `nats-admin-tls` Secret, and store each in the respective cluster's folder. So, for example, for EKS you would pull each entry from `secrets/nats-admin-tls` - `tls.ca`, `tls.crt`, and `tls.key` - and store each one in a file bearing that name in `jetstream/eks/`. The environment folders have a `.gitkeep` to make sure they are there when you pull down this repo. Once you're done, there should be three files in each folder, e.g.:
+
+- `jetstream/eks/tls.ca`
+- `jetstream/eks/tls.crt`
+- `jetstream/eks/tls.key`
+
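One way to pull those entries down is with `kubectl`'s jsonpath output. This is a sketch: it assumes the Secret's data keys match the file names, and `base64 -d` may be `base64 -D` on older macOS:

```sh
# run with the EKS context active; repeat against AKS, writing into jetstream/aks/
kubectl get secret nats-admin-tls -o jsonpath="{.data['tls\.ca']}"  | base64 -d > jetstream/eks/tls.ca
kubectl get secret nats-admin-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d > jetstream/eks/tls.crt
kubectl get secret nats-admin-tls -o jsonpath="{.data['tls\.key']}" | base64 -d > jetstream/eks/tls.key
```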
+**You'll also need to populate the different `tfvars` files with the public endpoint of the NATS server, provided by the cloud load balancer - the NLB hostname for EKS, and the IP for Azure.** A placeholder and a comment have been added to the top of each `tfvars` file; replace the placeholder with that endpoint.
+
+With those in place you should be able to run the OpenTofu configs, for example, against EKS:
+
+```sh
+tofu init --var-file=eks.tfvars
+tofu plan --out plan --var-file eks.tfvars
+tofu apply plan
+```
+
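The same flow runs against AKS with the other var file:

```sh
tofu init --var-file=aks.tfvars
tofu plan --out plan --var-file aks.tfvars
tofu apply plan
```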
+## Deploying the Services
+
+When configured against either context, you can deploy the services in bulk by running:
+
+```sh
+kubectl apply -f k8s-configs
+```
+
+This will run all four mathematical services as Deployments with 3 instances, as well as the requester Job. Note that the requester Job will deploy and possibly run before the other services are ready, so you may need to run it again to see the intended results:
+
+```sh
+kubectl replace -f k8s-configs/requester.yaml --force
+```
+
+## NATS Context
+
+The NATS CLI requires a context to connect to when it needs to communicate with a remote cluster. To set one up, we can use the same TLS credentials that we used in OpenTofu. We'll use absolute paths for the TLS files, so that when you run the NATS CLI against this context, you can run it from anywhere in your file structure. The following creates a context against the Azure load balancer:
+
+```sh
+nats context add west --select --tlsca=/absolute/path/to/jetstream/aks/tls.ca --tlscert=/absolute/path/to/jetstream/aks/tls.crt --tlskey=/absolute/path/to/jetstream/aks/tls.key -s <azure-lb-ip> --description=Azure_US_West
+```
+
+You'll want to replace the file paths with your own, and replace the `<azure-lb-ip>` placeholder with the actual IP of the Azure LB. You would then do the same for EKS, pointing to the AWS NLB hostname for the `-s` value, and using the file paths for the EKS TLS files.
+
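Once a context exists, a couple of quick read-only checks can confirm connectivity (a sketch using standard NATS CLI subcommands):

```sh
nats context ls   # the "west" context should be listed and marked selected
nats rtt          # measures round-trip time to the server in the selected context
```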
+## Running the Recorder in Kubernetes
+
+The Recorder service has its own Deployment YAML that will work against a properly set up K8s cluster. However, you'll have to make sure that there's a MySQL database accessible from the Kubernetes cluster in order to run it properly.
+
+For that I used the [Bitnami MySQL Helm Chart](https://artifacthub.io/packages/helm/bitnami/mysql).
+
+Whatever solution you decide to go with, be sure to update the ConfigMap in `k8s-configs/recorder.yaml`.
+
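For reference, a Bitnami install can be sketched as follows (the release name and password here are placeholders, not values from this repo):

```sh
helm install recorder-db oci://registry-1.docker.io/bitnamicharts/mysql \
  --set auth.rootPassword=changeme
```

The chart prints the in-cluster hostname of the new MySQL Service on install; that is the kind of value the ConfigMap needs.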
+## Running the Recorder Locally
+
+This is a bit more involved. You'll need to have a MySQL database stood up locally for the Recorder to connect to. I used a simple local installation of MySQL Server, which allowed for unauthenticated connectivity. Whatever solution you choose, be sure to update the default values in `services/recorder/main.py`.
+
+**Remember to create the schema and tables that will be used for storing data!**
+
+You'll also need to pull down the TLS credentials that the Recorder service uses, which are stored in `secrets/nats-recorder-tls` in your Kubernetes cluster. Those are expected to be stored in their respective files alongside the `main.py` in the `services/recorder` folder, so:
+
+- `services/recorder/tls.ca`
+- `services/recorder/tls.crt`
+- `services/recorder/tls.key`
+
+With those all set up, you can run the Recorder.
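As with the JetStream credentials, one way to pull those files down and start the service is the following sketch. It assumes the Secret's data keys are `tls.ca`, `tls.crt`, and `tls.key`, mirroring the admin Secret, and that `main.py` runs directly under `python`:

```sh
kubectl get secret nats-recorder-tls -o jsonpath="{.data['tls\.ca']}"  | base64 -d > services/recorder/tls.ca
kubectl get secret nats-recorder-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d > services/recorder/tls.crt
kubectl get secret nats-recorder-tls -o jsonpath="{.data['tls\.key']}" | base64 -d > services/recorder/tls.key
cd services/recorder && python main.py
```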
 ## Environment teardown

 The Azure environment can be entirely torn down by `cd`ing into the `aks-setup/tofu-aks` folder and running:
@@ -124,6 +197,7 @@ sh eks-teardown.sh

 - NATS Helm Chart: https://github.com/nats-io/k8s/blob/main/helm/charts/nats/README.md
 - NATS By Example: https://natsbyexample.com/examples/services/intro/go
-- Services Framwork documentation: https://docs.nats.io/using-nats/nex/getting-started/building-service
-- Cluster documentation: https://docs.nats.io/running-a-nats-service/configuration/clustering
-- SuperCluster documentation: https://docs.nats.io/running-a-nats-service/configuration/gateways
+- NATS TLS Setup Example: https://github.com/nats-io/nack/tree/main/examples/secure
+- NATS docs on JetStream: https://docs.nats.io/nats-concepts/jetstream
+- NATS JetStream OpenTofu provider: https://search.opentofu.org/provider/nats-io/jetstream/latest

aks-setup/aks-nats-values.yaml

Lines changed: 6 additions & 3 deletions
@@ -34,7 +34,8 @@ config:
 name: "nats-west"
 enabled: true
 merge:
-advertise: "172.178.141.38:7222"
+# populate with your NATS server address in Azure
+advertise: ""
 authorization:
 user: natsgate
 password: NATSC!u5t3rGa73way
@@ -45,9 +46,11 @@ config:
 # and updated the values entries with the correct endpoints.
 gateways:
 - name: "nats-east"
-url: nats://natsgate:NATSC!u5t3rGa73way@k8s-default-natseast-d3a2cc2411-682b3011270d1d56.elb.us-east-1.amazonaws.com:7222
+# populate with your NATS server address in EKS
+url: ""
 - name: "nats-west"
-url: nats://natsgate:NATSC!u5t3rGa73way@172.178.141.38:7222
+# populate with your NATS server address in Azure
+url: ""
 merge:
 accounts:
 SYS:

aks-setup/nats-tls-west.yaml

Lines changed: 2 additions & 1 deletion
@@ -52,7 +52,8 @@ spec:
 - '*.nats-west.default.svc'
 - '*.nats-west.default.svc.cluster.local'
 ipAddresses:
-- '172.178.141.38'
+# populate with your NATS server IP in Azure
+- ''
 ---
 # NATS system user TLS certificate
 apiVersion: cert-manager.io/v1

eks-setup/eks-nats-values.yaml

Lines changed: 5 additions & 2 deletions
@@ -31,6 +31,7 @@ config:
 name: "nats-east"
 enabled: true
 merge:
+# populate with your NATS server address in EKS
 advertise: "k8s-default-natseast-d3a2cc2411-682b3011270d1d56.elb.us-east-1.amazonaws.com:7222"
 authorization:
 user: natsgate
@@ -42,9 +43,11 @@ config:
 # and updated the values entries with the correct endpoints.
 gateways:
 - name: "nats-east"
-url: nats://natsgate:NATSC!u5t3rGa73way@k8s-default-natseast-d3a2cc2411-682b3011270d1d56.elb.us-east-1.amazonaws.com:7222
+# populate with your NATS server address in EKS
+url: ""
 - name: "nats-west"
-url: nats://natsgate:NATSC!u5t3rGa73way@172.178.141.38:7222
+# populate with your NATS server address in Azure
+url: ""
 merge:
 accounts:
 SYS:

eks-setup/nats-tls-east.yaml

Lines changed: 2 additions & 1 deletion
@@ -51,7 +51,8 @@ spec:
 - '*.nats-east.default'
 - '*.nats-east.default.svc'
 - '*.nats-east.default.svc.cluster.local'
-- 'k8s-default-natseast-d3a2cc2411-682b3011270d1d56.elb.us-east-1.amazonaws.com'
+# populate with your NATS server address in EKS
+- ''
 ---
 # NATS system user TLS certificate
 apiVersion: cert-manager.io/v1

jetstream/aks.tfvars

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
-nats_servers = "172.178.141.38:4222"
+# populate with your NATS server address
+nats_servers = ""
 nats_tls_ca_file = "aks/tls.ca"
 nats_tls_cert_file = "aks/tls.crt"
 nats_tls_key_file = "aks/tls.key"

jetstream/aks/.gitkeep

Whitespace-only changes.

jetstream/eks.tfvars

Lines changed: 2 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1,4 +1,5 @@
1-
nats_servers = "k8s-default-natseast-d3a2cc2411-682b3011270d1d56.elb.us-east-1.amazonaws.com:4222"
1+
# populate with your NATS server address
2+
nats_servers = ""
23
nats_tls_ca_file = "eks/tls.ca"
34
nats_tls_cert_file = "eks/tls.crt"
45
nats_tls_key_file = "eks/tls.key"

jetstream/eks/.gitkeep

Whitespace-only changes.

jetstream/main.tf

Lines changed: 9 additions & 9 deletions
@@ -4,16 +4,16 @@ resource "jetstream_stream" "answers" {
 subjects = ["answers.significant", "answers.throwaway"]
 storage = "file"
 # max_bytes = 1028
-# max_msgs = 30
+# max_msgs = 5
 # max_age = 20
 # duplicate_window = 10
-# max_age = 60 * 60 * 24 * 5
+max_age = 60 * 60 * 24 * 5
 }

-# resource "jetstream_consumer" "recorder" {
-# stream_id = jetstream_stream.answers.id
-# durable_name = "answers-consumer"
-# deliver_all = true
-# filter_subjects = ["answers.significant"]
-# sample_freq = 100
-# }
+resource "jetstream_consumer" "recorder" {
+stream_id = jetstream_stream.answers.id
+durable_name = "answers-consumer"
+deliver_all = true
+filter_subjects = ["answers.significant"]
+sample_freq = 100
+}
