Commit 33b320e (parent df7c8b0)

Update Blog “deploying-cribl-stream-containers-on-hpe-greenlake”

1 file changed: content/blog/deploying-cribl-stream-containers-on-hpe-greenlake.md (+114, −5)
@@ -25,7 +25,7 @@ Deploying Cribl Stream containers on HPE GreenLake offers a number of advantages

![Cribl architecture diagram](/img/criblarchitecure.png "Cribl architecture")

#### Prerequisites

Before you deploy Cribl Stream containers on HPE GreenLake, you will need to:
@@ -81,22 +81,22 @@ The Cribl Stream helm charts can be found on github (<https://github.com/criblio

Log into the cloud CLI or a jump box and issue the following commands:

```shell
export KUBECONFIG=<path_to_kube_settings>
kubectl get nodes -n cribl
kubectl get svc -n cribl
```

Label the leader node and the worker nodes:

```shell
kubectl label nodes <leader_node> stream=leader
kubectl label nodes <worker_node> stream=worker
```

Validate by running:

```shell
kubectl get nodes --show-labels
```

@@ -112,4 +112,113 @@ For the worker nodes, create a file named `Worker_values.yaml` and modify line 9
112112
```yaml
113113
nodeSelector:
114114
stream: worker
115-
```
115+
```
Next, set the labels for your worker and leader nodes.

First, get a list of all the nodes and the labels associated with them:

```shell
kubectl get nodes --show-labels
```

Now, identify the nodes and label each one according to its role in this deployment.

Here is an example of setting the host `k8s-cribl-master-t497j-92m66.gl-hpe.net` as the leader:

```shell
kubectl label nodes k8s-cribl-master-t497j-92m66.gl-hpe.net stream=leader
```

Here is an example of setting the host `k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net` as a worker node:

```shell
kubectl label nodes k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net stream=worker
```

If you accidentally label a node and want to remove or overwrite the label, you can use this command:

```shell
kubectl label nodes k8s-cribl-wor8v32g-cdjdc-876nq.gl-hpe.net stream=worker --overwrite=true
```
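The `--overwrite` flag replaces an existing label value. To remove the label entirely, kubectl accepts a trailing dash on the key name, for example:

```shell
# Remove the "stream" label from a node (the trailing dash deletes the label)
kubectl label nodes k8s-cribl-wor8v32g-cdjdc-876nq.gl-hpe.net stream-
```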
Once the labels have been set, you are ready to run the helm commands and deploy Cribl Stream in your environment. The first command deploys the Cribl leader node:

```shell
helm install --generate-name cribl/logstream-leader -f leader_values.yaml -n cribl
```

When successful, you will see output similar to the following:

```shell
NAME: logstream-leader-1696441333
LAST DEPLOYED: Wed Oct 4 17:42:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

Note that this deploys the leader node with the parameters found in the `leader_values.yaml` file and into the namespace `cribl`.
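The leader values file is not shown in full here; by analogy with the `Worker_values.yaml` snippet above, a `leader_values.yaml` would pin the leader pod to the node labeled `stream=leader`. A minimal, hypothetical sketch (confirm the exact keys against the chart's own default `values.yaml`):

```yaml
# Hypothetical sketch: schedule the leader pod onto the node labeled stream=leader.
# Verify the key names against the logstream-leader chart's default values.yaml.
nodeSelector:
  stream: leader
```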
Next, deploy the worker nodes using the `workers_values.yaml` file:

```shell
helm install --generate-name cribl/logstream-workergroup -f workers_values.yaml
```

When successful, you will see output similar to the following:

```shell
NAME: logstream-workergroup-1696441592
LAST DEPLOYED: Wed Oct 4 17:46:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
Now you can validate the deployment by running the following command:

```shell
kubectl get svc
```

You should see results similar to the following:

```shell
NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes                             ClusterIP      10.96.0.1        <none>        443/TCP             22d
logstream-leader-1696441333            LoadBalancer   10.111.152.178   <pending>     9000:31200/TCP      9m56s
logstream-leader-1696441333-internal   ClusterIP      10.105.14.164    <none>        9000/TCP,4200/TCP   9m56s
logstream-workergroup-1696441592       LoadBalancer   10.102.239.137   <pending>     10001:30942/TCP,9997:32609/TCP,10080:32174/TCP,10081:31898/TCP,5140:30771/TCP,8125:31937/TCP,9200:32134/TCP,8088:32016/TCP,10200:32528/TCP,10300:30836/TCP   5m35s
```

Note that the names and IP addresses will differ from the example above. To test that the deployment was successful, run the following command and log into your deployment at `localhost` on port 9000:

```shell
kubectl port-forward service/logstream-leader-1696441333 9000:9000 &
```
#### Uninstalling Cribl using Helm

You can uninstall the Cribl deployment for both the leader and worker nodes by running the following commands, respectively:

```shell
helm uninstall logstream-leader-1696441333 -n default
helm uninstall logstream-workergroup-1696441592 -n default
```

Make sure to use your own leader and worker group release names when uninstalling Cribl from your deployment.
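If you do not recall the generated release names, `helm list` shows every release in a namespace:

```shell
# List Helm releases in the namespace to find the generated release names
helm list -n default
```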
#### Configuring Cribl Stream

Once you have deployed the Cribl Stream containers, you need to configure them to collect and process your data. You can do this by editing the Cribl Stream configuration files; the Cribl Stream documentation provides detailed instructions on how to configure Cribl Stream.
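To give a sense of the kind of configuration involved, a Cribl Stream source definition is YAML of roughly the following shape. This is a hypothetical, illustrative sketch only, not taken verbatim from the Cribl docs; check the documentation for the exact file layout and field names:

```yaml
# Hypothetical example only: consult the Cribl Stream docs for the exact schema.
# Defines a syslog source listening on port 5140, one of the ports exposed
# by the worker-group service in the `kubectl get svc` output above.
inputs:
  in_syslog:
    type: syslog
    udpPort: 5140
```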
#### Sending your data to your analysis platform of choice

Once you have configured Cribl Stream to collect and process your data, you need to send it to your analysis platform of choice. Cribl Stream supports a wide range of analysis platforms, including Elasticsearch, Splunk, and Kafka.
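As with sources, a destination is defined in YAML. The sketch below is hypothetical and illustrative only (the host, port, and field names are assumptions); see the Cribl Stream documentation for the exact schema:

```yaml
# Hypothetical example only: field names and values are illustrative.
# Routes processed events to a Splunk indexer.
outputs:
  out_splunk:
    type: splunk
    host: splunk-indexer.example.com
    port: 9997
```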
#### Conclusion

Deploying Cribl Stream containers on HPE GreenLake is a simple and effective way to implement a vendor-agnostic observability pipeline. Cribl Stream containers offer a number of advantages, including agility, cost savings, security, and management simplicity.
