docs/modules/demos/pages/logging.adoc (10 additions, 19 deletions)
@@ -26,16 +26,6 @@ To run this demo, your system needs at least:
 
 If you use MacOS or Windows and use Docker to run Kubernetes, set the RAM to at least 4 GB in _Preferences > Resources_.
 
-==== Linux
-
-OpenSearch uses a mmapfs directory by default to store its indices. The default operating system limits on mmap counts
-are likely too low - usually 65530, which may result in out-of-memory exceptions. So, the Linux setting
-`vm.max_map_count` on the host machine where the containers are running must be set to at least 262144.
-
-This is automatically set by default in this demo (via the `setSysctlMaxMapCount` Stack parameter).
-
-OpenSearch has more information about this setting in their https://opensearch.org/docs/2.12/install-and-configure/install-opensearch/index/#important-settings[documentation].
-
 == Overview
 
 This demo will
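The deleted paragraph above documents the requirement of raising `vm.max_map_count` to at least 262144. For hosts where the `setSysctlMaxMapCount` Stack parameter is not used, a minimal sketch of setting it by hand follows; the drop-in file name is purely illustrative, the rest is standard Linux `sysctl` usage:

[source,console]
----
# Check the current limit on the host that runs the containers
$ sysctl vm.max_map_count

# Raise it for the running kernel (lost on reboot)
$ sudo sysctl -w vm.max_map_count=262144

# Persist it across reboots (the file name is only an example)
$ echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-opensearch.conf
$ sudo sysctl --system
----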
@@ -63,15 +53,16 @@ To list the installed Stackable services run the following command:
docs/modules/demos/pages/nifi-kafka-druid-earthquake-data.adoc (0 additions, 97 deletions)
@@ -86,103 +86,6 @@ $ stackablectl stacklet list
 
 include::partial$instance-hint.adoc[]
 
-== Inspect the data in Kafka
-
-Kafka is an event streaming platform to stream the data in near real-time.
-All the messages put in and read from Kafka are structured in dedicated queues called topics.
-The test data will be put into a topic called earthquakes.
-The records are produced (written) by the test data generator and consumed (read) by Druid afterwards in the same order they were created.
-
-As Kafka has no web interface, you must use a Kafka client like {kcat}[kcat].
-Kafka uses mutual TLS, so clients wanting to connect to Kafka must present a valid TLS certificate.
-The easiest way to obtain this is to shell into the `kafka-broker-default-0` Pod, as we will do in the following section for demonstration purposes.
-For a production setup, you should spin up a dedicated Pod provisioned with a certificate acting as a Kafka client instead of shell-ing into the Kafka Pod.
-
-=== List the available Topics
-
-You can execute a command on the Kafka broker to list the available topics as follows:
-
-// In the following commands the kcat-prober container instead of the kafka container is used to send requests to Kafka.
-// This is necessary because kcat cannot use key- and truststore files with empty passwords, which are mounted here to the kafka container.
-// However, the kcat-prober container has TLS certificates mounted, which can be used by kcat to connect to Kafka.
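The concrete topic-listing command is collapsed in this diff view. As an illustration only, listing topics through the `kcat-prober` container could look roughly like the sketch below; the Pod and container names come from the deleted lines above, while the broker port and certificate paths are assumptions and will differ from the demo's actual command:

[source,console]
----
# Hypothetical sketch: port and certificate paths are placeholders, not the demo's real values
$ kubectl exec -it kafka-broker-default-0 -c kcat-prober -- \
    kcat -L -b localhost:9093 \
    -X security.protocol=ssl \
    -X ssl.ca.location=/path/to/ca.crt \
    -X ssl.certificate.location=/path/to/tls.crt \
    -X ssl.key.location=/path/to/tls.key
----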
[… 74 more deleted lines collapsed in the diff view …]

-If you calculate `379,000` records * `8` partitions, you end up with ~ 3,032,000 records.
-The output also shows that the last measurement record was produced at the timestamp `1752584024936`, which translates to `Tuesday, 15 July 2025 14:53:44.936 GMT+02:00`
-(using e.g. the command `date -d @1752584024`).
-
 == NiFi
 
 NiFi is used to fetch earthquake data from the internet and ingest it into Kafka.
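The deleted lines above convert the Kafka timestamp `1752584024936` (milliseconds) by dropping the last three digits and passing the epoch seconds to `date`. A quick sketch of that conversion with GNU `date`; the exact output format depends on your locale:

[source,console]
----
# 1752584024936 ms -> 1752584024 s, shown here in UTC
$ date -u -d @1752584024
Tue Jul 15 12:53:44 UTC 2025
# which corresponds to 14:53:44 in GMT+02:00, as stated in the deleted text
----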