
Commit cb73ace

Merge branch 'master' into testing
2 parents 45fbc9d + e45b7d9 commit cb73ace

File tree

84 files changed: +7084 −42 lines changed
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
---
title: "Comparing Standard Library Sorts: The Impact of Parallelism"
date: 2024-01-30T19:16:00.000Z
externalLink: https://chapel-lang.org/blog/posts/std-sort-performance/
author: Michael Ferguson
authorimage: https://chapel-lang.org/blog/authors/michael-ferguson/photo.jpg
disable: false
tags:
- opensource
- Python
- chapel
---
External blog

content/blog/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 ---
 title: "Dark Mode Theming in Grommet - Part 1: How to set up and apply a theme"
 date: 2023-12-15T15:45:24.933Z
-featuredBlog: true
-priority: 3
+featuredBlog: false
+priority: 9
 author: Matt Glissmann
 authorimage: /img/blogs/Avatar6.svg
 thumbnailimage: null
Lines changed: 239 additions & 0 deletions
@@ -0,0 +1,239 @@
---
title: Deploying Cribl Stream Containers on HPE GreenLake for Private Cloud Enterprise
date: 2024-01-17T21:25:38.155Z
author: Elias Alagna & Kam Amir
authorimage: /img/Avatar1.svg
disable: false
tags:
- ezmeral
- hpe-ezmeral-container-platform
- hpe-ezmeral-data-fabric
- kubernetes
- hpe-greenlake
- as-a-service
- PCE
- Private Cloud Enterprise
- logging
- Splunk
- HPE GreenLake
- Cribl
- hpe-ezmeral
---
Hewlett Packard Enterprise and [Cribl](https://cribl.io/) bring together breakthrough technology to optimize and modernize observability data management, offering new levels of performance and platform independence.

The challenges of security and log management are only partly solved by existing software solutions. HPE and Cribl address the remaining problems of optimizing, routing, and replaying logs, providing independence from the industry's software products in this space. HPE offers a robust, modern, easy-to-use platform for running multiple log management solutions alongside Cribl Stream. Together, HPE and Cribl reduce the total cost of ownership of log management systems by optimizing the software, accelerating the infrastructure, and reducing management costs.

Cribl Stream is an observability and data streaming platform for real-time processing of logs, metrics, traces, and observability data. It enables ITOps, SRE, SecOps, and observability teams to collect the data they want, shape it into the formats they need, route it wherever they want it to go, and replay it on demand. Customers can therefore observe more and spend less, gain choice and flexibility, and retain control over their data. HPE GreenLake is a private and hybrid cloud service that delivers the benefits of public cloud to your on-premises environment.

Cribl software can be deployed as standalone software or run on a fully managed HPE GreenLake platform, offering further ease of use for organizations that want the benefits of cloud in an on-premises private cloud offering.

Deploying Cribl Stream containers on HPE GreenLake is a simple and effective way to implement a vendor-agnostic observability pipeline. [Cribl software](https://www.hpe.com/us/en/software/marketplace/cribl-stream.html) is available in the [HPE GreenLake Marketplace](https://www.hpe.com/us/en/software/marketplace.html).

Deploying Cribl Stream containers on HPE GreenLake offers a number of advantages, including:

* **Agility:** Cribl Stream containers can be deployed quickly and easily on HPE GreenLake, giving you the agility to scale your observability pipeline up or down as needed.
* **Cost savings:** Cribl Stream containers can help you reduce the cost of your observability pipeline by optimizing data storage and processing through data reduction, data normalization, and log routing.
* **Security:** Cribl Stream containers can help you secure your data by encrypting it at rest and in transit.
* **Management simplicity:** HPE GreenLake provides a single management console for managing your Cribl Stream containers, making it easy to keep your observability pipeline running smoothly.

![Cribl architecture diagram](/img/cribl-on-hpe-architecture.png "Cribl architecture")

#### Prerequisites

Before you deploy Cribl Stream containers on HPE GreenLake, you will need to:

* Have an active HPE GreenLake agreement, a deployed HPE GreenLake for Private Cloud Enterprise, and an account on [https://common.cloud.hpe.com/](https://common.cloud.hpe.com/).
* Install the HPE Ezmeral Runtime Enterprise [Kubectl executable](https://docs.ezmeral.hpe.com/runtime-enterprise/56/reference/kubernetes/tenant-project-administration/Dashboard__Kubernetes_TenantProject_Administrator.html).
* Create an HPE Ezmeral Runtime Enterprise [Kubernetes cluster](https://youtu.be/HSYWa2MalF4).
* Install the Cribl Stream [Kubernetes operator](https://docs.cribl.io/stream/getting-started-guide/).

Steps to deploy Cribl Stream containers on HPE GreenLake:

1. Create a Cribl Stream deployment file. This file specifies the Cribl Stream containers that you want to deploy, as well as the resources they need.
2. Deploy the Cribl Stream containers to your HPE GreenLake cluster using the Cribl Stream Kubernetes operator.
3. Verify that the Cribl Stream containers are running and healthy.
4. Configure Cribl Stream to collect and process your data.
5. Send your data to your analysis platform of choice.

#### Example deployment file

The following example deployment file deploys a Cribl Stream container that collects and processes logs from a Kubernetes cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cribl-stream
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cribl-stream
  template:
    metadata:
      labels:
        app: cribl-stream
    spec:
      containers:
        - name: cribl-stream
          image: cribl/cribl-stream:latest
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: cribl-stream-config
              mountPath: /etc/cribl-stream
      volumes:
        - name: cribl-stream-config
          configMap:
            name: cribl-stream-config
```
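
The Deployment above mounts a ConfigMap named `cribl-stream-config` into `/etc/cribl-stream`; that ConfigMap must exist in the same namespace before the pod can start. A minimal, hypothetical sketch is shown below — the file name and contents are placeholders for illustration, not Cribl's actual configuration schema:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cribl-stream-config
data:
  # Placeholder file and contents; consult the Cribl Stream
  # documentation for the real configuration files and settings.
  example.yml: |
    setting: value
```

Apply it with `kubectl apply -f` before creating the Deployment so the volume mount can resolve.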

#### Deploying Cribl Stream using Helm Charts

The Cribl Stream Helm charts can be found on GitHub (<https://github.com/criblio/helm-charts>). The following steps assume that the namespace is set to `cribl`.

Log into the cloud CLI or a jump box and issue the following commands:

```shell
export KUBECONFIG=<path_to_kube_settings>
kubectl get nodes -n cribl
kubectl get svc -n cribl
```

Label the leader node and the worker nodes:

```shell
kubectl label nodes <leader_node> stream=leader
kubectl label nodes <worker_node> stream=worker
```

Validate by running:

```shell
kubectl get nodes --show-labels
```

Create and modify the values overrides for the leader and worker nodes. For the leader nodes, create a file named `leader_values.yaml` and modify line 97:

```yaml
nodeSelector:
  stream: leader
```

For the worker nodes, create a file named `workers_values.yaml` and modify line 97:

```yaml
nodeSelector:
  stream: worker
```
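
Because `helm install -f` merges an override file onto the chart's default values, a minimal override containing only the `nodeSelector` stanza also works and avoids hand-editing line 97 of the full `values.yaml`. A sketch for the leader file (the same pattern applies to `workers_values.yaml`):

```shell
# Write a minimal Helm values override selecting the leader-labeled nodes.
cat > leader_values.yaml <<'EOF'
nodeSelector:
  stream: leader
EOF

# Sanity-check the file before passing it to `helm install -f`.
grep 'stream:' leader_values.yaml
```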

Next, set the labels on your worker and leader nodes. To do this, first get a list of all the nodes and the labels associated with them:

```shell
kubectl get nodes --show-labels
```

Now identify the nodes and label each one according to its role in this deployment.

Here is an example of setting the host `k8s-cribl-master-t497j-92m66.gl-hpe.net` as a leader:

```shell
kubectl label nodes k8s-cribl-master-t497j-92m66.gl-hpe.net stream=leader
```

Here is an example of setting the host `k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net` as a worker node:

```shell
kubectl label nodes k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net stream=worker
```

If you accidentally label a node and want to remove or overwrite the label, you can use this command:

```shell
kubectl label nodes k8s-cribl-wor8v32g-cdjdc-876nq.gl-hpe.net stream=worker --overwrite=true
```
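
With several workers, labeling can also be scripted. A small sketch that previews the commands before running them — the node names here are hypothetical; substitute the names reported by `kubectl get nodes`:

```shell
# Hypothetical worker node names; replace with the real names
# reported by `kubectl get nodes`.
workers="k8s-cribl-worker-1.gl-hpe.net k8s-cribl-worker-2.gl-hpe.net"

for node in $workers; do
  # Prefixed with echo as a dry run; remove the echo to apply the labels.
  echo kubectl label nodes "$node" stream=worker --overwrite=true
done
```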

Once the labels have been set, you are ready to run the `helm` commands and deploy Cribl Stream in your environment. The first command deploys the Cribl leader node:

```shell
helm install --generate-name cribl/logstream-leader -f leader_values.yaml -n cribl
```

When successful, you will see output similar to what's shown below:

```shell
NAME: logstream-leader-1696441333
LAST DEPLOYED: Wed Oct 4 17:42:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

Note that this deploys the leader node with the parameters found in the `leader_values.yaml` file and into the namespace `cribl`.

Next, deploy the worker nodes using the `workers_values.yaml` file into the namespace `cribl`:

```shell
helm install --generate-name cribl/logstream-workergroup -f workers_values.yaml -n cribl
```

When successful, you will see output similar to the one below:

```shell
NAME: logstream-workergroup-1696441592
LAST DEPLOYED: Wed Oct 4 17:46:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

Now you can validate the deployment by running the following command:

```shell
kubectl get svc
```

You should see results similar to the following:

```shell
NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes                             ClusterIP      10.96.0.1        <none>        443/TCP             22d
logstream-leader-1696441333            LoadBalancer   10.111.152.178   <pending>     9000:31200/TCP      9m56s
logstream-leader-1696441333-internal   ClusterIP      10.105.14.164    <none>        9000/TCP,4200/TCP   9m56s
logstream-workergroup-1696441592       LoadBalancer   10.102.239.137   <pending>     10001:30942/TCP,9997:32609/TCP,10080:32174/TCP,10081:31898/TCP,5140:30771/TCP,8125:31937/TCP,9200:32134/TCP,8088:32016/TCP,10200:32528/TCP,10300:30836/TCP   5m35s
```

Note: the names and IP addresses will differ from the example above. To test that the deployment was successful, run the following command and log into your deployment using localhost and port 9000:

```shell
kubectl port-forward service/logstream-leader-1696441333 9000:9000 &
```

#### Uninstalling Cribl using Helm

You can uninstall the Cribl deployment for the leader and worker nodes by running the following commands, respectively:

```shell
helm uninstall logstream-leader-1696441333 -n default
helm uninstall logstream-workergroup-1696441592 -n default
```

Make sure to use your own leader and worker group release names when uninstalling Cribl from your deployment.

#### Configuring Cribl Stream

Once you have [deployed the Cribl Stream](https://docs.cribl.io/stream/deploy-kubernetes-leader/) containers, you need to configure them to collect and process your data. You can do this by editing the Cribl Stream configuration file. The Cribl Stream documentation provides detailed instructions on how to configure Cribl Stream.

#### Sending your data to your analysis platform of choice

Once you have configured Cribl Stream to collect and process your data, you need to send it to your analysis platform of choice. Cribl Stream supports a wide range of analysis platforms, including Elasticsearch, Splunk, and Kafka.

#### Conclusion

For more information on Cribl Stream, check out the [Optimized Enterprise Logging Solution with HPE Ezmeral and Cribl business white paper](https://www.hpe.com/psnow/doc/a50006507enw).

For more blog posts related to HPE Ezmeral Software, keep coming back to the HPE Developer Community blog and search on HPE Ezmeral.
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
---
title: Finetuning an LLM using HuggingFace + Determined
date: 2024-01-31T15:29:49.692Z
externalLink: https://www.determined.ai/blog/llm-finetuning
author: Kevin Musgrave and Agnieszka Ciborowska
authorimage: /img/kevinmusgrave-profilepic-small.jpg
thumbnailimage: /img/determined-llm-finetuning.jpeg
disable: false
tags:
- determined-ai
---
External blog post
