Commit 9d1452a

Merge pull request #51 from superstreamlabs/staging
Release v1.5.0
2 parents e1ef1f4 + 627e910 commit 9d1452a

File tree: 4 files changed, +114 -43 lines changed

README.md

Lines changed: 72 additions & 37 deletions
````diff
@@ -226,8 +226,55 @@ helmfile -e default diff
 ```bash
 helmfile -e default apply
 ```
+## Appendix C - Superstream Client Configuration
 
-## Appendix C - Uninstall
+**Client Connection**
+
+To connect a new client, the data plane's Fully Qualified Domain Name (FQDN) for the Superstream service must be exposed and reachable. The connection procedure depends on the client's location relative to the Superstream service:
+
+1. For clients in environments like AWS EKS, expose the Superstream service using a LoadBalancer. Below is an example of the required service configuration in a YAML file (svc.yaml):
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: superstream-host-external
+  annotations:
+    service.beta.kubernetes.io/aws-load-balancer-type: external
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
+    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+    service.beta.kubernetes.io/aws-load-balancer-name: superstream-host-external
+spec:
+  ports:
+    - name: superstream-host-external
+      port: 4222
+      protocol: TCP
+      targetPort: 4222
+  selector:
+    app.kubernetes.io/component: nats
+    app.kubernetes.io/instance: nats
+    app.kubernetes.io/name: nats
+  type: LoadBalancer
+```
+
+2. To deploy this configuration, run the following command, replacing <NAMESPACE> with the appropriate namespace:
+```bash
+kubectl apply -f svc.yaml -n <NAMESPACE>
+```
+
+3. Validate that the load balancer was created successfully and that the FQDN is available:
+```bash
+$ kubectl get svc superstream-host-external -n <NAMESPACE>
+NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP                                             PORT(S)          AGE
+superstream-host-external   LoadBalancer   10.100.100.100   superstream-host-external.elb.us-east-1.amazonaws.com   4222:32074/TCP   1d
+```
+
+The exposed FQDN should be used together with the provided activation token via the following variables in the client configuration:
+```yaml
+SUPERSTREAM_HOST=<FQDN>
+SUPERSTREAM_TOKEN=<ACTIVATION_TOKEN>
+```
+
+## Appendix D - Uninstall
 
 **Steps to Uninstall Superstream Data Plane Deployment.**
````

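Before starting a client, the two variables above can be sanity-checked. A minimal Python sketch (the helper names are illustrative; the TCP check only verifies that the NATS port on the exposed FQDN accepts connections, not that the token is valid):

```python
import os
import socket


def load_superstream_config() -> dict:
    """Read the client connection settings documented above from the environment."""
    host = os.environ.get("SUPERSTREAM_HOST")
    token = os.environ.get("SUPERSTREAM_TOKEN")
    missing = [name for name, value in
               [("SUPERSTREAM_HOST", host), ("SUPERSTREAM_TOKEN", token)] if not value]
    if missing:
        raise RuntimeError(f"missing required variables: {', '.join(missing)}")
    return {"host": host, "token": token}


def port_reachable(host: str, port: int = 4222, timeout: float = 3.0) -> bool:
    """Best-effort TCP check that the exposed FQDN accepts connections on the NATS port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```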
````diff
@@ -244,58 +291,46 @@ It's crucial to delete the stateful storage linked to the data plane. Ensure you
 kubectl delete pvc -l app.kubernetes.io/instance=nats -n <NAMESPACE>
 ```
 
-## Appendix D - Custom changes to the helmfile
+## Appendix E - Custom changes
 
-**StorageClass definition for NATS service**
+**StorageClass definition**
 
-If there is no default storageClass configured for the Kubernetes cluster, it should be configured manually from the helmfile.yaml.
+If there is no default storageClass configured for the Kubernetes cluster, or there is a need to choose a custom storageClass, it can be done by specifying its name in the `environments/default.yaml` file.
 
-1. Open helmfile.yaml with a preferred editor and navigate to the nats configuration section:
+1. Open `environments/default.yaml` with a preferred editor:
 
 ```yaml
-releases:
-  - name: {{ .Values.natsReleaseName }}
-    installed: true
-    namespace: {{ .Values.namespace }}
-    chart: nats/nats
-    version: 1.1.7
-    values:
-      - container:
-          image:
-            tag: alpine3.19
-          env:
-            ACTIVATION_TOKEN:
-              valueFrom:
-                secretKeyRef:
-                  name: superstream-creds
-                  key: ACTIVATION_TOKEN
-      - promExporter:
-          enabled: true
-      - natsBox:
-          enabled: false
-      - config:
-          cluster:
-            enabled: {{ .Values.haDeployment }}
-          jetstream:
-            enabled: true
+helmVersion: 0.2.3 # Define the version of the superstream helm chart.
+namespace: superstream # Specify the Kubernetes namespace where the resources will be deployed, isolating them within this designated namespace.
+storageClassName: "" # Leave blank if you want to use default K8s cluster storageClass
+...
 ```
 
-2. Add the following section after the `jetstream.enabled` line and specify the name of the desired storageClass:
+2. Fill in the name of the desired storageClass:
 
 ```yaml
-jetstream:
-  enabled: true
-  fileStore:
-    pvc:
-      storageClassName: <THE_NAME>
+helmVersion: 0.2.3 # Define the version of the superstream helm chart.
+namespace: superstream # Specify the Kubernetes namespace where the resources will be deployed, isolating them within this designated namespace.
+storageClassName: "exampleSsdStorageClass" # Leave blank if you want to use default K8s cluster storageClass
+...
 ```
 
-3. Run the deployment
+3. Run the deployment.
 
 ```bash
 helmfile -e default apply
 ```
 
+4. Validate that the created PVCs are assigned to the desired storageClass:
+```bash
+kubectl get pvc -n superstream
+NAME             STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS             AGE
+nats-js-nats-0   Bound    pvc-ac65bfe7   10Gi       RWO            exampleSsdStorageClass   45h
+nats-js-nats-1   Bound    pvc-d3982397   10Gi       RWO            exampleSsdStorageClass   45h
+nats-js-nats-2   Bound    pvc-e85b69e0   10Gi       RWO            exampleSsdStorageClass   45h
+```
+
 ## Disable HPA - autoscaling ability of the Data Plane service
 
 If no autoscaling CRD is configured for the Kubernetes cluster, it should be configured manually from the helmfile.yaml.
````
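The storageClass validation in step 4 above can be scripted for CI. A minimal Python sketch that parses the default `kubectl get pvc` table output (`pvc_storage_classes` is an illustrative helper; it assumes kubectl's default table printer, whose ACCESS MODES header contains a space):

```python
def pvc_storage_classes(kubectl_output: str) -> dict:
    """Map PVC name -> storageClass from `kubectl get pvc` default table output."""
    lines = kubectl_output.strip().splitlines()
    # The ACCESS MODES header contains a space, which would misalign a naive split.
    header = lines[0].replace("ACCESS MODES", "ACCESS_MODES").split()
    name_idx = header.index("NAME")
    sc_idx = header.index("STORAGECLASS")
    result = {}
    for row in lines[1:]:
        cols = row.split()
        result[cols[name_idx]] = cols[sc_idx]
    return result


# Sample output from step 4 above.
sample = """\
NAME             STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS             AGE
nats-js-nats-0   Bound    pvc-ac65bfe7   10Gi       RWO            exampleSsdStorageClass   45h
nats-js-nats-1   Bound    pvc-d3982397   10Gi       RWO            exampleSsdStorageClass   45h
nats-js-nats-2   Bound    pvc-e85b69e0   10Gi       RWO            exampleSsdStorageClass   45h
"""
```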

environments/default.yaml

Lines changed: 2 additions & 1 deletion
````diff
@@ -1,5 +1,6 @@
-helmVersion: 0.2.3 # Define the version of the superstream helm chart.
+helmVersion: 0.3.0 # Define the version of the superstream helm chart.
 namespace: superstream # Specify the Kubernetes namespace where the resources will be deployed, isolating them within this designated namespace.
+storageClassName: "" # Leave blank if you want to use default K8s cluster storageClass
 name: <DATAPLANE_NAME> # Define the data plane name within 32 characters, excluding '.', and using only letters, numbers, '-', and '_'.
 accountId: "" # Provide the account ID that is associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
 activationToken: "" # Enter the activation token required for services or resources that need an initial token for activation or authentication.
````
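The `name` comment above encodes a checkable constraint: at most 32 characters, no `.`, and only letters, numbers, `-`, and `_`. A minimal Python sketch of that check (`valid_dataplane_name` is an illustrative helper, and the regex assumes "letters" means ASCII letters):

```python
import re

# Constraint from the default.yaml comment: within 32 characters,
# excluding '.', using only letters, numbers, '-', and '_'.
_NAME_RE = re.compile(r"[A-Za-z0-9_-]{1,32}")


def valid_dataplane_name(name: str) -> bool:
    """Return True if the data plane name satisfies the documented constraint."""
    return _NAME_RE.fullmatch(name) is not None
```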

helmfile.yaml

Lines changed: 39 additions & 4 deletions
````diff
@@ -12,6 +12,7 @@ environments:
     values:
       - ./environments/default.yaml
       - haDeployment: true
+      - skipLocalAuthentication: true
       - requestCpu: '2'
       - requestMemory: 2Gi
       - natsReleaseName: nats # Set the release name for the NATS deployment, which is used to uniquely identify the set of resources deployed for NATS.
````
````diff
@@ -50,15 +51,20 @@ releases:
             enabled: {{ .Values.haDeployment }}
           jetstream:
             enabled: true
+            {{ if .Values.storageClassName }}
+            fileStore:
+              pvc:
+                storageClassName: {{ .Values.storageClassName }}
+            {{ end }}
           merge: {
             accounts: {
               SYS: {
-                users: [{user: superstream_sys, password: << $ACTIVATION_TOKEN >>}]
+                users: [{user: superstream_sys, password: {{ if .Values.skipLocalAuthentication }}"no-auth"{{ else }}"<< $ACTIVATION_TOKEN >>"{{ end }}}]
                 # System account allows subscribing to $SYS.>
               },
               internal: {
                 jetstream: enable,
-                users: [{user: superstream_internal, password: << $ACTIVATION_TOKEN >>}]
+                users: [{user: superstream_internal, password: {{ if .Values.skipLocalAuthentication }}"no-auth"{{ else }}"<< $ACTIVATION_TOKEN >>"{{ end }}}]
                 # Regular user account for clients
               }
             },
````
````diff
@@ -87,6 +93,22 @@ releases:
             - pods
           verbs: ["get", "list", "watch"]
       - config:
+          agent:
+            interval: "10s"
+            round_interval: true
+            metric_batch_size: 1000
+            metric_buffer_limit: 10000
+            collection_jitter: "0s"
+            flush_interval: "10s"
+            flush_jitter: "0s"
+            precision: ""
+            debug: false
+            quiet: false
+            logfile: "/tmp/telegraf.log"
+            logfile_rotation_max_size: "10MB"
+            logfile_rotation_max_archives: 5
+            hostname: "$HOSTNAME"
+            omit_hostname: false
           outputs:
             - influxdb_v2:
                 urls:
````
````diff
@@ -96,7 +118,7 @@
                 content_encoding: "gzip"
                 http_headers:
                   Authorization: "{{ .Values.activationToken }}"
-                namepass: ["syslog"]
+                namepass: ["syslog","telegraf_logs"]
             - influxdb_v2:
                 urls:
                   - "https://superstream-monitoring.mgmt.memphis-gcp.dev"
````
````diff
@@ -105,7 +127,7 @@
                 content_encoding: "gzip"
                 http_headers:
                   Authorization: "{{ .Values.activationToken }}"
-                namedrop: ["syslog"]
+                namedrop: ["syslog","telegraf_logs"]
           processors:
             - enum:
                 mapping:
````
````diff
@@ -140,6 +162,19 @@
                 tags:
                   accountId: "{{ .Values.accountId }}_{{ .Values.name }}"
                   chart: "telegraf-{{ .Values.telegrafHelmVersion }}"
+            - tail:
+                files:
+                  - "/tmp/telegraf.log"
+                from_beginning: false
+                data_format: "grok"
+                grok_patterns:
+                  - "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}! %{GREEDYDATA:message}"
+                grok_custom_patterns:
+                  LOGLEVEL [IWE]
+                name_override: "telegraf_logs"
+                tags:
+                  accountId: "{{ .Values.accountId }}_{{ .Values.name }}"
+                  appname: "telegraf"
 
   - name: {{ .Values.superstreamReleaseName }}
     installed: true
````
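The `tail` input above parses Telegraf's own log file with a grok pattern and the custom class `LOGLEVEL [IWE]` (Info / Warn / Error). A rough Python equivalent of that pattern, useful for testing it offline (the regex is a simplified rendering of grok's `TIMESTAMP_ISO8601`, and the sample log line below is illustrative):

```python
import re

# Python rendering of the grok pattern used by the tail input:
# "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}! %{GREEDYDATA:message}"
# with the custom pattern LOGLEVEL [IWE].
LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:?\d{2})?)"
    r" (?P<loglevel>[IWE])! (?P<message>.*)$"
)


def parse_telegraf_log(line: str):
    """Return {timestamp, loglevel, message} for a matching line, else None."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None
```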

version.conf

Lines changed: 1 addition & 1 deletion
````diff
@@ -1 +1 @@
-1.4.0
+1.5.0
````
